Project RSS Feeds
VLDB is a premier annual international forum for data management and database researchers, vendors, practitioners, application developers, and users. The conference will feature research talks, tutorials, demonstrations, and workshops, covering current issues in data management and in database and information systems research. Data management and databases remain among the main technological cornerstones of the emerging applications of the twenty-first century.
VLDB 2014 will take place in Hangzhou, one of the best tourism cities in China. Hangzhou is also one of the eight ancient capitals in Chinese history and was among the first group of National Famous Historical and Cultural Cities. The West Lake in Hangzhou, known as an "earthly paradise", is one of the top attractions in China and abroad. Hangzhou is also the hometown of tea and silk: Longjing green tea ranks first among the top ten Chinese teas. The city attracts many tourists every year and offers numerous opportunities for sightseeing (e.g., West Lake, Lingyin Temple, Qiantang River), outdoor activities (e.g., hiking, cycling, boating), culture tasting (e.g., tea, silk, traditional cuisine, the Grand Canal), as well as fun (e.g., Songcheng Theme Park, the "Impression of West Lake" light opera).
More at http://www.vldb.org/2014/
co-located with VLDB 2014
Following the 1st International Workshop on Benchmarking RDF Systems (BeRSys 2013), the aim of the BeRSys 2014 workshop is to provide a discussion forum where researchers and industry practitioners can meet to discuss topics related to the performance of RDF systems. BeRSys 2014 is the only workshop dedicated to benchmarking different aspects of RDF engines, in the line of the TPCTC series of workshops. The focus of the workshop is to expose and initiate discussions on best practices, different application needs, and scenarios related to different aspects of RDF data management.
Communicating directly with customers in an efficient and effective manner is still a desideratum for small, medium, and large companies alike. With the rapid development of ICT, including the Internet, Web-based communication, and, more recently, social media, the number of possibilities for interacting directly with customers has grown even larger. For small and medium companies in general, and those in the tourism industry in particular, direct communication with customers has always been seen as an enabler of massive direct sales. However, these expectations are yet to be fulfilled. In this talk we analyze the causes of this failure and propose a solution for how effective and efficient communication in the tourism industry should be realized. We perform an empirical analysis of the usage patterns of Internet technology in the tourism domain. We first introduce the challenges faced today by tourism service providers in terms of online and mobile booking, commission payments, and social media. We then look at the key technologies and communication channels that tourism service providers must use in order to be highly visible online, including static, dynamic, sharing, and collaboration channels, social media, fora, vocabularies, semantic formats, etc. We analyze the uptake of these technologies and channels by hotels, hotel chains, destinations, and booking channels, and point out that, on the one hand, intermediaries, i.e. booking channels (e.g. booking.com, hrs.de), are using them well, while, on the other hand, tourism service providers are failing to do so. Finally, we show how these technologies and communication channels should be used for effective direct marketing in the tourism industry by means of a real pilot developed in collaboration with the association of tourism service providers of the city of Innsbruck and its surroundings, i.e. Tourismusverband Innsbruck (TVb).
Download slides here
New York, US
More information to appear at http://www.kdd.org/kdd2014/
What are best examples of data-driven Web applications you've ever seen? The updates to Open Street Map after the Haiti earthquake? The mapping of all 9,966,539 buildings in the Netherlands? The NHS Prescription data? Things like SF Park that help you 'park your car smarter' in San Francisco using real time data? Bing maps and Google Earth?
All these and many, many more data-driven applications have geospatial information at their core. Very often the common factor across multiple data sets is the location data, and maps are crucial in visualizing correlations between data sets that may otherwise be hidden.
It's this desire to work with multiple data sets in different formats about different topics and link those with the powerful technologies used in geospatial information systems that is behind the linking geospatial data workshop.
How can geographic information best be integrated with other data on the Web? How can we discover that different facts in different data sets relate to the same place, especially when 'place' can be expressed in different ways and at different levels of granularity?
On behalf of the Smart Open Data project, the World Wide Web Consortium (W3C), in partnership with the Open Geospatial Consortium (OGC) and the OGC GeoSPARQL Standards Working Group, the UK Government Linked Data Working Group, Google, and Ordnance Survey, invites you to share your experiences, successes, and frustrations in using geospatial information.
The workshop is open to all and will take place at Campus London on Wednesday 5th - Thursday 6th March, 2014.
The tutorial provides a comprehensive view of the RDF Stream Processing (RSP) research area. It consists of four parts. The first introduces the basic RSP concepts: RDF streams to represent temporally ordered sequences of data items; continuous SPARQL extensions to query RDF streams; and RSP engines to execute continuous query answering over RDF streams. The second part presents the available RSP engine implementations. It starts with an overview of the existing RSP engines, highlighting similarities and differences among them. Next, two existing implementations are analysed in depth: C-SPARQL and SPARQLstream. The third part is a hands-on session where the attendees learn how to (1) use the RSP engines presented above and (2) make the systems interact with each other. Finally, the fourth part of the tutorial provides an overview of RSP-related topics: RSP engine benchmarking, stream reasoning, and real-world deployments. The tutorial closes with a discussion of the open challenges and research problems of this field.
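The core RSP idea of continuously evaluating a query over a sliding window of timestamped triples can be illustrated with a small sketch. This is a toy analogue written from scratch, assuming a time-based window and a single triple-pattern query; it is not the actual C-SPARQL or SPARQLstream API.

```python
from collections import deque

# A "triple" here is (subject, predicate, object); each stream item carries a timestamp.
class WindowedStream:
    """Keeps only the triples whose timestamp falls in the last `width` time units."""
    def __init__(self, width):
        self.width = width
        self.items = deque()  # (timestamp, triple), in arrival order

    def push(self, timestamp, triple):
        self.items.append((timestamp, triple))
        # Evict triples that have fallen out of the window.
        while self.items and self.items[0][0] <= timestamp - self.width:
            self.items.popleft()

    def query(self, predicate):
        """Continuous-query analogue of SELECT ?s ?o WHERE { ?s <predicate> ?o },
        evaluated over the current window contents."""
        return [(s, o) for _, (s, p, o) in self.items if p == predicate]

stream = WindowedStream(width=10)
stream.push(1, ("room1", "hasTemperature", 21))
stream.push(5, ("room2", "hasTemperature", 23))
stream.push(14, ("room1", "hasTemperature", 22))  # evicts the t=1 reading

print(stream.query("hasTemperature"))  # [('room2', 23), ('room1', 22)]
```

In a real RSP engine the query is registered once and re-evaluated as the window slides, rather than being invoked manually after each arrival as done here.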
The Centre for Applied Linguistics of the Santiago de Cuba branch of the Ministry of Science, Technology and the Environment is pleased to announce the Fourteenth International Symposium on Social Communication. The event will be held in Santiago de Cuba, January 19–23, 2015, and on this occasion will be dedicated to the 500th anniversary of the founding of the city of Santiago de Cuba. This interdisciplinary event will focus on social communication processes from the points of view of linguistics, computational linguistics, medicine, mass media, art, ethnology, and folklore.
In the context of the XIV Symposium, the workshop "Resources and Tools for the Spanish and Portuguese Languages and Their Variants in Latin America", sponsored by the Centre for Applied Linguistics and the Spanish Society for Natural Language Processing (SEPLN), will also be held. The aims of the workshop are to present the new NLP tools developed in the Spanish- and Portuguese-speaking countries of Latin America and to learn about linguistic studies in Latin America in which NLP instruments are applied.
More information to appear on the website: http://www.santiago.cu/hosting/linguistica/index.php?id=en
New York, US
More information to appear at the conference website
Download slides here
Download the slides here
The evolution of ontologies is an undisputed necessity in the current research community. Understanding this evolution is a fundamental problem since, based on this understanding, maintainers of dependent artifacts need to decide about possible changes. Moreover, as ontologies are often developed by several ontology engineers, it is also important for them to understand what changes the others have made. Recent research focuses on just identifying and presenting the changes from one ontology version to another. In this paper, we argue that this is not enough and that we need more fine-grained methods for understanding how the ontology evolved. In this direction, we present a module, named ProvenanceTracker, which takes as input the list of changes between two or more RDF/S ontology versions and can answer fine-grained provenance queries about ontology resources. Our module can identify when a resource was created and how; the sequence of changes that led to the creation of a specific resource can be identified and presented to the user. We evaluate the time complexity of our approach and show that it can reduce the human effort spent on understanding ontology evolution. (Attachment: paper-30.pdf, 942.37 KB)
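The kind of fine-grained provenance query the abstract describes can be sketched in a few lines: given a change log between ontology versions, index the changes per resource so that "when and how was this resource created?" becomes a lookup. The change-tuple format and function names here are illustrative assumptions, not ProvenanceTracker's actual interface.

```python
# Each change is (version, operation, resource), e.g. ("v2", "add_class", "Person").
def build_provenance(changes):
    """Index the change log per resource so provenance queries become lookups."""
    prov = {}
    for version, op, resource in changes:
        prov.setdefault(resource, []).append((version, op))
    return prov

def creation_of(prov, resource):
    """Return the (version, operation) of the first change that touched `resource`,
    i.e. when and how the resource was created."""
    history = prov.get(resource)
    return history[0] if history else None

changes = [
    ("v1", "add_class", "Person"),
    ("v2", "add_property", "hasName"),
    ("v2", "rename_class", "Person"),
]
prov = build_provenance(changes)
print(creation_of(prov, "hasName"))  # ('v2', 'add_property')
print(prov["Person"])               # full change history of the resource
```

The full history list is what would be presented to the user as the sequence of changes leading to a resource's current state.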
In this work we describe and evaluate Hippalus, a system that offers exploratory search enriched with preferences. Hippalus supports the very popular interaction model of Faceted and Dynamic Taxonomies (FDT), enriched with actions that allow users to express their preferences. The underlying preference framework allows expressing preferences over attributes (facets) whose values can be hierarchically organized and/or multi-valued, and offers automatic conflict resolution. To evaluate the system we conducted a user study with a number of tasks related to a "car selection" scenario. The results of the comparative evaluation, with and without the preference actions, were impressive: with the preference-enriched FDT, all users completed all the tasks successfully in one third of the time, performing one third of the actions, compared to the plain FDT. Moreover, all users (both plain and expert) preferred the preference-enriched interface. The benefits are also evident through various other metrics. (Attachment: Papadakos_2014_ExploreDB.pdf, 1.76 MB)
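The idea of preference-enriched faceted search can be made concrete with a minimal sketch: objects described by facet values, user preference actions turned into per-facet rankings, and the result list reordered accordingly. The facet names, the additive scoring rule, and the data are illustrative assumptions, not Hippalus internals (which support hierarchical values and conflict resolution as well).

```python
# A tiny "car selection" catalogue described by two facets.
cars = [
    {"name": "carA", "fuel": "diesel", "color": "red"},
    {"name": "carB", "fuel": "petrol", "color": "blue"},
    {"name": "carC", "fuel": "diesel", "color": "blue"},
]

# User actions like "prefer diesel over petrol" / "prefer blue" become
# per-facet rankings: lower rank = more preferred; unmentioned values
# get a large default rank.
preferences = {"fuel": {"diesel": 0, "petrol": 1}, "color": {"blue": 0}}

def rank(obj):
    # Sum the per-facet ranks; a Pareto-style composition would be an
    # alternative way to combine preferences across facets.
    return sum(preferences.get(f, {}).get(v, 99)
               for f, v in obj.items() if f != "name")

ordered = sorted(cars, key=rank)
print([c["name"] for c in ordered])  # ['carC', 'carB', 'carA']
```

carC matches both preferred values (diesel and blue), so it is promoted to the top; carA's unranked color pushes it last.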
We report on our experiences with integrating geospatial datasets using Linked Data technologies. We describe NeoGeo, an integration vocabulary, and an integration scenario involving two geospatial datasets: the GADM database of Global Administrative Areas and NUTS, the Nomenclature of Territorial Units for Statistics. We identify the need for provenance in order to correctly interpret query results over the integrated dataset. (Attachment: lgd14_submission_54.pdf, 272.52 KB)
In this paper we describe Map4RDFiOS, a tool for visualizing and navigating RDF-based geographic datasets available via a SPARQL endpoint, as well as connecting that data with statistical data represented with the W3C Data Cube vocabulary or sensor data represented with the W3C Semantic Sensor Network ontology. (Attachment: lgd14_submission_46.pdf, 5.56 MB)
As the amount of available linked data expands and the number of related applications increases, the management of aspects such as provenance and access control of such data becomes an issue. Current approaches do not provide sufficient support for automatic reasoning over different kinds of metadata and their possible interdependencies. MetaReasons is a framework that supports the representation of metadata in a logical formalism and, consequently, automated reasoning on metadata. Different types of metadata, such as data provenance and accessibility restrictions, are represented as distinct meta-theories, and dependencies between types of metadata are represented by rules between different meta-theories. In this paper we present the logic-based definition of the MetaReasons framework and two examples of meta-theories for provenance and access control. Moreover, we propose a materialization calculus for concrete forward reasoning on the two aspects. (Attachment: TR-FBK-DKM-2014-01.pdf, 689.22 KB)
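The interplay the abstract describes, rules bridging a provenance meta-theory and an access-control meta-theory, materialized by forward chaining, can be sketched as a toy fixpoint computation. The fact and rule names are illustrative assumptions, not the paper's actual calculus.

```python
# Two meta-theories as flat fact sets: provenance facts ("derivedFrom", x, y)
# and access-control facts ("restricted", x).
facts = {
    ("derivedFrom", "d2", "d1"),  # provenance: d2 was derived from d1
    ("derivedFrom", "d3", "d2"),  # provenance: d3 was derived from d2
    ("restricted", "d1"),         # access control: d1 is restricted
}

def materialize(facts):
    """Forward-chain one cross-meta-theory rule to a fixpoint:
    anything derived from a restricted dataset is restricted too."""
    closure = set(facts)
    changed = True
    while changed:
        changed = False
        for fact in list(closure):
            if fact[0] == "derivedFrom":
                _, x, y = fact
                if ("restricted", y) in closure and ("restricted", x) not in closure:
                    closure.add(("restricted", x))
                    changed = True
    return closure

closure = materialize(facts)
print(("restricted", "d3") in closure)  # True: restriction propagates transitively
```

Two iterations are needed here: the first marks d2 restricted via d1, the second marks d3 via d2, which is why the loop runs until no new facts appear.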