Data Set 3.8


The DBpedia data set uses a large multi-domain ontology which has been derived from Wikipedia. The English version of the DBpedia 3.8 data set describes 3.77 million "things" with 400 million "facts".

In addition, we provide localized versions of DBpedia in 111 languages. All these versions together describe 20.8 million things, out of which 10.5 million overlap (are interlinked) with concepts from the English DBpedia. The full DBpedia data set features labels and abstracts for 10.3 million unique things in up to 111 different languages; 8.0 million links to images and 24.4 million HTML links to external web pages; 27.2 million data links into external RDF data sets, 55.8 million links to Wikipedia categories, and 8.2 million YAGO categories. The dataset consists of 1.89 billion pieces of information (RDF triples) out of which 400 million were extracted from the English edition of Wikipedia, 1.46 billion were extracted from other language editions, and about 27 million are data links to external RDF data sets.


1 Background

Wikipedia has grown into one of the central knowledge sources of mankind and is maintained by thousands of contributors. Wikipedia articles consist mostly of free text, but also contain various types of structured information, such as infobox templates, categorisation information, images, geo-coordinates, and links to external Web pages. For instance, the figure below shows the source code and the visualisation of an infobox template containing structured information about the town of Innsbruck.


The DBpedia project extracts various kinds of structured information from Wikipedia editions in 111 languages and combines this information into a huge, cross-domain knowledge base.

DBpedia uses the Resource Description Framework (RDF) as a flexible data model for representing extracted information and for publishing it on the Web. We use the SPARQL query language to query this data. Please refer to the Developers Guide to Semantic Web Toolkits to find a development toolkit in your preferred programming language to process DBpedia data.

2 Content of the DBpedia Data Set

The English version of the DBpedia knowledge base currently describes 3.77 million things, out of which 2.35 million are classified in a consistent ontology, including 764,000 persons, 573,000 places (including 387,000 populated places), 333,000 creative works (including 112,000 music albums, 72,000 films and 18,000 video games), 192,000 organizations (including 45,000 companies and 42,000 educational institutions), 202,000 species and 5,500 diseases.

In addition, we provide localized versions of DBpedia in 111 languages. All these versions together describe 20.8 million things, out of which 10.5 million overlap with the concepts from the English DBpedia. The full DBpedia data set features labels and abstracts for 10.3 million unique things in 111 different languages; 8.0 million links to images and 24.4 million links to external web pages; 27.2 million external links into other RDF datasets, 55.8 million links to Wikipedia categories, and 8.2 million YAGO categories. The dataset consists of 1.89 billion pieces of information (RDF triples), out of which 400 million were extracted from the English edition of Wikipedia, 1.46 billion were extracted from other language editions, and about 27 million are links to external datasets. Detailed statistics about the DBpedia datasets in 22 popular languages are provided at Dataset Statistics.

The table below contains links to some example "things" from the data set:


Class              Examples
City               Cambridge, Berlin, Manchester
Country            Spain, Iceland, South Korea
Politician         George W. Bush, Nicolas Sarkozy, Angela Merkel
Musician           AC/DC, Diana Ross, Röyksopp
Music album        Led Zeppelin III, Like a Virgin, Thriller
Director           Woody Allen, Oliver Stone, Takashi Miike
Film               Pulp Fiction, Hysterical Blindness, Breakfast at Tiffany's
Book               The Lord of the Rings, The Adventures of Tom Sawyer, the Bible
Computer Game      Tetris, World of Warcraft, Sam & Max Hit the Road
Technical Standard HTML, RDF, URI

You can also use Richard Cyganiak's PHP script to view random things from the DBpedia data set.

A list of the properties used in the different DBpedia data sets is also available.

3 Denoting or Naming "things"

Each thing in the DBpedia data set is denoted by a de-referenceable IRI- or URI-based reference of the form http://dbpedia.org/resource/Name, where Name is derived from the URL of the source Wikipedia article, which has the form http://en.wikipedia.org/wiki/Name. Thus, each DBpedia entity is tied directly to a Wikipedia article. Every DBpedia entity name resolves to a description-oriented Web document (or Web resource).

Until DBpedia release 3.6, we only used article names from the English Wikipedia, but since DBpedia release 3.7, we also provide localized datasets that contain IRIs like http://xx.dbpedia.org/resource/Name, where xx is a Wikipedia language code and Name is taken from the source URL http://xx.wikipedia.org/wiki/Name.

Starting with DBpedia release 3.8, we use IRIs for most DBpedia entity names. IRIs are more readable and generally preferable to URIs, but for backwards compatibility, we still use URIs for DBpedia resources extracted from the English Wikipedia and IRIs for all other languages. Triples in Turtle files use IRIs for all languages, even for English.

There are several details on the encoding of URIs that should always be taken into account.
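One such detail is the URI/IRI distinction described above. The sketch below shows how a Wikipedia article title might map to a percent-encoded DBpedia URI versus a Unicode IRI; the exact set of punctuation characters DBpedia leaves unescaped is an assumption here, not taken from this document.

```python
# Sketch: mapping a Wikipedia article title to a DBpedia URI (percent-
# encoded ASCII) versus a DBpedia IRI (Unicode kept as-is). The exact set
# of characters left unescaped is an assumption for illustration.
from urllib.parse import quote

def dbpedia_uri(title: str) -> str:
    # Spaces in article titles become underscores.
    name = title.replace(" ", "_")
    # Percent-encode non-ASCII characters (UTF-8 based), keeping some
    # punctuation that commonly appears unescaped in resource names.
    return "http://dbpedia.org/resource/" + quote(name, safe="_,()'&!:")

def dbpedia_iri(title: str) -> str:
    return "http://dbpedia.org/resource/" + title.replace(" ", "_")

print(dbpedia_uri("Gérard Depardieu"))
# http://dbpedia.org/resource/G%C3%A9rard_Depardieu
print(dbpedia_iri("Gérard Depardieu"))
# http://dbpedia.org/resource/Gérard_Depardieu
```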

4 Describing "things"

Each DBpedia entity is described by various properties. Below, we give an overview of the most important types of properties.

4.1 Basic Information

Every DBpedia resource is described by a label, a short and long English abstract, a link to the corresponding Wikipedia page, and a link to an image depicting the thing (if available).

If a thing exists in multiple language versions of Wikipedia, then short and long abstracts within these languages and links to the different language Wikipedia pages are added to the description. The DBpedia data set contains the following numbers of abstracts per language (July 2012):


Language Number of Abstracts
English 3,770,000
German 1,244,000
French 1,197,000
Dutch 993,000
Italian 882,000
Spanish 879,000
Polish 848,000
Japanese 781,000
Portuguese 699,000
Swedish 457,000
Chinese 445,000

4.2 Classifications

DBpedia provides three different classification schemata for things.


  1. Wikipedia Categories are represented using the SKOS vocabulary and DCMI terms.
  2. The YAGO Classification is derived from the Wikipedia category system using WordNet. Please refer to Yago: A Core of Semantic Knowledge – Unifying WordNet and Wikipedia (PDF) for more details.
  3. WordNet Synset Links were generated by manually relating Wikipedia infobox templates and WordNet synsets, and adding a corresponding link to each thing that uses a specific template. In theory, this classification should be more precise than the Wikipedia category system.

Using these classifications within SPARQL queries allows you to select things of a certain type.

4.2.1 Wikipedia Categories

4.2.2 YAGO Classes

4.2.3 WordNet

4.3 Infobox Data

Wikipedia infoboxes contain very specific information about things and are thus a very valuable source of structured information that can be used to ask expressive queries against Wikipedia. The DBpedia project currently extracts three different datasets from the Wikipedia infoboxes.


  1. The Infobox Dataset is created using our initial, now three-year-old infobox parsing approach. This extractor extracts all properties from all infoboxes and templates within all Wikipedia articles. Extracted information is represented using properties in the http://dbpedia.org/property/ namespace. The names of these properties directly reflect the names of the Wikipedia infobox properties. Property names are not cleaned or merged. Property types are not part of a subsumption hierarchy and there is no consistent ontology for the infobox dataset. Currently, there are approximately 8,000 different property types. The infobox extractor performs only a minimal amount of property value clean-up, e.g., by converting a value like "June 2009" to the XML Schema format "2009-06". You should therefore use the infobox dataset only if your application requires complete coverage of all Wikipedia properties and you are prepared to accept relatively noisy data.
  2. The Infobox Ontology. With the DBpedia 3.2 release, we introduced a new infobox extraction method based on hand-generated mappings of Wikipedia infoboxes/templates to a newly created DBpedia ontology. The mappings compensate for weaknesses in the Wikipedia infobox system, such as using different infoboxes for the same type of thing (class) or different property names for the same property. As a result, the instance data within the infobox ontology is much cleaner and better structured than the Infobox Dataset, but it does not yet cover all infobox types and infobox properties within Wikipedia. Starting with DBpedia release 3.5, we provide three different Infobox Ontology data sets:
    • The Ontology Infobox Types dataset contains the rdf:types of the instances which have been extracted from the infoboxes.
    • The Ontology Infobox Properties dataset contains the actual data values that have been extracted from infoboxes. The data values are represented using ontology properties (e.g., 'volume') that may be applied to different things (e.g., the volume of a lake and the volume of a planet). This restricts the number of different properties to a minimum, but has the drawback that it is not possible to automatically infer the class of an entity based on a property. For instance, an application that discovers an entity described using the volume property cannot infer that the entity is a lake and then, for example, use a map to visualize it. Properties follow the http://dbpedia.org/ontology/{propertyname} naming schema. All values are normalized to their respective SI unit.
    • The Ontology Infobox Properties (Specific) dataset contains properties which have been specialized for a specific class using a specific unit, e.g. the property height is specialized on the class Person using the unit centimetres instead of metres. Specialized properties follow the http://dbpedia.org/ontology/{Class}/{property} naming schema (e.g. http://dbpedia.org/ontology/Person/height). Each such property has a single class as rdfs:domain and rdfs:range and can therefore be used for classification reasoning. This makes it easier to express queries against the data, e.g., finding all lakes whose volume is in a certain range. Typically, the range of these properties does not use SI units, but a unit which is more appropriate in the specific domain.

All three data sets are available for download as well as being available for queries via the DBpedia SPARQL endpoint.

The infobox data enables sophisticated, fine-grained queries over the data set. Some example queries are shown below:

4.3.1 Querying the Infobox Dataset

Abstracts of movies starring Tom Cruise, released before 2000:

      SELECT ?subject ?label ?released ?abstract WHERE {
        ?subject rdf:type <http://dbpedia.org/ontology/Film> .
        ?subject dbpedia2:starring <http://dbpedia.org/resource/Tom_Cruise> .
        ?subject rdfs:comment ?abstract .
        ?subject rdfs:label ?label .
        FILTER (lang(?abstract) = "en" && lang(?label) = "en") .
        ?subject dbpedia2:released ?released .
        FILTER (xsd:date(?released) < "2000-01-01"^^xsd:date) .
      }
      ORDER BY ?released
      LIMIT 20

The official websites of companies with more than 50,000 employees:

      SELECT ?subject ?employees ?homepage WHERE {
        ?subject rdf:type <http://dbpedia.org/ontology/Company> .
        ?subject dbpedia2:numEmployees ?employees .
        FILTER (xsd:integer(?employees) >= 50000) .
        ?subject foaf:homepage ?homepage .
      }
      ORDER BY DESC(xsd:integer(?employees))
      LIMIT 20

Cities with more than 2 million inhabitants:

      SELECT ?subject ?population WHERE {
        ?subject rdf:type <http://dbpedia.org/ontology/City> .
        ?subject dbpedia2:population ?population .
        FILTER (xsd:integer(?population) > 2000000)
      }
      ORDER BY DESC(xsd:integer(?population))
      LIMIT 20

4.3.2 Querying the Infobox Ontology

List all episodes of the HBO television series The Sopranos ordered by their air-date:


      SELECT ?e ?date ?number ?season WHERE {
        ?e <http://dbpedia.org/ontology/series>
           <http://dbpedia.org/resource/The_Sopranos> .
        ?e <http://dbpedia.org/ontology/releaseDate>   ?date .
        ?e <http://dbpedia.org/ontology/episodeNumber> ?number .
        ?e <http://dbpedia.org/ontology/seasonNumber>  ?season
      }
      ORDER BY DESC(?date)

Software developed by an organisation founded in California:


      SELECT ?company ?product WHERE {
        ?company  a  <http://dbpedia.org/ontology/Organisation> .
        ?company  <http://dbpedia.org/ontology/foundationPlace>
                  <http://dbpedia.org/resource/California> .
        ?product  <http://dbpedia.org/ontology/developer>  ?company .
        ?product  a  <http://dbpedia.org/ontology/Software>
      }

4.4 External Links

The DBpedia data set contains HTML links to external web pages as well as RDF links into external data sources.

There are two types of links to HTML pages: dbpedia:reference links point to several web pages about a thing. In addition, some things also have foaf:homepage links that point to web pages that can be considered the "official homepage" of a thing.

RDF links are represented using the owl:sameAs property. Please refer to Interlinking for more information about RDF links and the interlinked data sets.

4.4.1 FOAF Homepage

4.4.2 owl:sameAs Links

4.5 Geo-Coordinates

The DBpedia data set contains geo-coordinates for 986,000 geographic locations. Geo-coordinates are expressed using the W3C Basic Geo Vocabulary.

Besides simple listings of geo-coordinates (e.g., for German soccer stadiums), the geo-coordinates allow sophisticated queries, like "show me all things near the Eiffel Tower":

Things within roughly 0.05 degrees of the Eiffel Tower:

      SELECT ?subject ?label ?lat ?long WHERE {
        <http://dbpedia.org/resource/Eiffel_Tower> geo:lat  ?eiffelLat .
        <http://dbpedia.org/resource/Eiffel_Tower> geo:long ?eiffelLong .
        ?subject geo:lat ?lat .
        ?subject geo:long ?long .
        ?subject rdfs:label ?label .
        FILTER (?lat - ?eiffelLat <= 0.05 && ?eiffelLat - ?lat <= 0.05 &&
                ?long - ?eiffelLong <= 0.05 && ?eiffelLong - ?long <= 0.05 &&
                lang(?label) = "en") .
      }
      LIMIT 20

The same query pattern works for other landmarks, e.g. the Brandenburg Gate:

      SELECT ?subject ?label ?lat ?long WHERE {
        <http://dbpedia.org/resource/Brandenburg_Gate> geo:lat  ?gateLat .
        <http://dbpedia.org/resource/Brandenburg_Gate> geo:long ?gateLong .
        ?subject geo:lat ?lat .
        ?subject geo:long ?long .
        ?subject rdfs:label ?label .
        FILTER (?lat - ?gateLat <= 0.05 && ?gateLat - ?lat <= 0.05 &&
                ?long - ?gateLong <= 0.05 && ?gateLong - ?long <= 0.05 &&
                lang(?label) = "en") .
      }
      LIMIT 20

5 Provenance Meta-Data

In addition to the triples provided by the N-Triples datasets, the N-Quads datasets attach a provenance URI to each statement. The provenance URI denotes the origin of the extracted triple in Wikipedia.

The provenance URI is composed of the URI of the Wikipedia article from which the statement was extracted and a number of parameters denoting the exact source line.
The following parameters are set:

  • absolute-line: The (absolute) line number in the Wikipedia article source. The first line of a source has line number 1.
  • relative-line: The line number in the Wikipedia article source relative to the current section.
  • section: The section inside the article.

The source of the given statement can be found in the 23rd line. It is located in the first line of the section "E23".
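A provenance URI carrying these parameters can be dissected with standard URL parsing, as in the sketch below. The example URI is hypothetical but uses the documented parameter names:

```python
# Sketch: extracting the provenance parameters described above from an
# N-Quads context URI. The example URI is hypothetical.
from urllib.parse import urlparse, parse_qs

prov = ("http://en.wikipedia.org/wiki/Innsbruck"
        "?absolute-line=23&relative-line=1&section=E23")

params = parse_qs(urlparse(prov).query)
print(params["absolute-line"][0])  # -> 23
print(params["relative-line"][0])  # -> 1
print(params["section"][0])        # -> E23
```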

6 Localized Datasets

The localized datasets contain the complete DBpedia data from non-English Wikipedias. Until DBpedia release 3.6, we extracted data from non-English Wikipedia pages only if an equivalent English page existed, as we wanted a single URI to identify a resource across all 97 languages. However, many pages in the non-English Wikipedia editions have no equivalent English page (especially small towns in different countries, e.g. the Austrian village Endach, or legal and administrative terms that are relevant only for a single country). Relying on English URIs alone therefore meant that DBpedia contained no data for these entities, and many DBpedia users have complained about this shortcoming.

Since the DBpedia 3.7 release, we provide localized DBpedia editions for download that contain data from all Wikipedia pages in a specific language. In DBpedia 3.7, these localized editions covered the following 15 languages: ca, de, el, es, fr, ga, hr, hu, it, nl, pl, pt, ru, sl, tr. Starting with DBpedia 3.8, we provide localized DBpedia editions for all languages.

The IRIs identifying entities in these internationalized datasets are constructed directly from the non-English title and a language-specific URI namespace (e.g. http://ja.dbpedia.org/resource/ベルリン), so there are now many different URIs in DBpedia that refer to Berlin.

We also extract the inter-language links from the different Wikipedia editions. Thus, whenever bijective inter-language links between a non-English Wikipedia page and its English equivalent exist, the resulting owl:sameAs link can be used to relate the localized DBpedia URI to its equivalent in the main (English) DBpedia edition. The localized DBpedia editions are provided for download on the DBpedia download page.

Note that not all localized editions provide public SPARQL endpoints, nor do all localized URIs dereference. This might change in the future, as more local DBpedia chapters are set up in different countries as part of the DBpedia internationalization effort.

All DBpedia IRIs/URIs in the canonicalized datasets use the generic namespace http://dbpedia.org/resource/. For backwards compatibility, the N-Triples files (.nt, .nq) use URIs, in which non-ASCII characters (such as ö) are percent-encoded. The Turtle (.ttl) files use IRIs, in which such characters appear unescaped.

The localized datasets use DBpedia IRIs (not URIs) and language-specific namespaces, e.g. http://el.dbpedia.org/resource/Βερολίνο.

6.1 Directory structure and file names

For the DBpedia 3.7 release, we created two separate folders on the download server: /3.7/ for the datasets using 'English' URIs, and /3.7-i18n/ for the datasets using 'local' URIs. Since the 3.8 release, we have abandoned this high-level distinction and instead offer different dataset files in the download folder for each language: files with 'local' IRIs are named after their dataset, while files with 'English' URIs/IRIs append _en_uris to the dataset name.

6.2 Data Set Statistics


  • Dataset Statistics provides detailed statistics about the DBpedia datasets in 22 languages.
  • Cross-Language Statistics provides statistics about the cross-language overlap of instances and property values between these languages.

7 iPopulator

iPopulator is a system that automatically populates infoboxes of Wikipedia articles by extracting attribute values from the article's text. The dataset contains new extracted information and complements DBpedia's attribute values.

8 Datasets for Natural Language Processing (NLP)

Every DBpedia dataset is potentially useful for tasks in Natural Language Processing (NLP) and Computational Linguistics. In Datasets/NLP we describe a few examples of how to use these datasets. Moreover, we describe a number of extended datasets that were generated during the creation of DBpedia Spotlight and other NLP-related projects.

9 License

DBpedia is derived from Wikipedia and is distributed under the same licensing terms as Wikipedia itself. As Wikipedia has moved to dual-licensing, we also dual-license DBpedia starting with release 3.4.

DBpedia data from version 3.4 on is licensed under the terms of the Creative Commons Attribution-ShareAlike 3.0 license and the GNU Free Documentation License. All DBpedia releases up to and including release 3.3 are licensed under the terms of the GNU Free Documentation License only.

Attribution in this case means keeping DBpedia URIs visible and active through at least one (preferably all) of @href, <link />, or "Link:". If live links are impossible (e.g., when printed on paper), a textual attribution is acceptable.

This material is Open Knowledge.