Tag: semantic-web

4 datasets
  • Freebase Wikipedia Extraction (WEX)

    Offsite — The Freebase Wikipedia Extraction (WEX) is a processed dump of the English-language Wikipedia. The wiki markup for each article is transformed into machine-readable XML, and common relational features such as templates, infoboxes, categories, article sections, and redirects are extracted in tabular form. Freebase WEX is provided as a set of database tables in TSV format ... (a TSV-reading sketch follows this list)
  • DBPedia Main

    Offsite — DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. The DBpedia knowledge base currently describes more than 2.6 million things, including at least 213,000 persons, 328,000 places, 57,000 music albums, 36,000 films, and 20,000 companies. The knowledge base consists of 274 million pieces of ... (a SPARQL query sketch follows this list)
  • Amsterdam Museum Data Set (RDF)

    Offsite — The Amsterdam Museum dataset covers more than 70,000 cultural heritage objects related to the city of Amsterdam, as described by the museum. The metadata was retrieved from an XML Web API of the museum’s Adlib collection database and converted to RDF compliant with the Europeana Data Model (EDM). This makes the Amsterdam Museum data the first of its kind to be officially ... (an RDF-loading sketch follows this list)
  • Freebase Data Dump

    Offsite — Freebase data dumps provide all of the current facts and assertions within the Freebase system. The data dumps are complete, general-purpose extracts of the Freebase data in a variety of formats. Freebase releases a fresh data dump every three months. Freebase is an open database of the world’s information, covering millions of topics across hundreds of categories. ... (a dump-streaming sketch follows this list)
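
Since WEX ships as TSV database tables, a minimal reading sketch in Python follows. The file name "articles.tsv" and the guess that the first two columns hold an article id and a title are assumptions for illustration, not the official WEX schema.

```python
import csv
import sys

# A minimal sketch, assuming a WEX table saved locally as "articles.tsv".
# File name and column meanings are illustrative, not the official schema.
csv.field_size_limit(sys.maxsize)  # rows carrying full article XML can be huge

with open("articles.tsv", encoding="utf-8", newline="") as f:
    for row in csv.reader(f, delimiter="\t"):
        print(row[0], row[1])  # e.g. an article id and its title
        break  # only peek at the first row
```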
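DBpedia also exposes its knowledge base through a public SPARQL endpoint at https://dbpedia.org/sparql. The sketch below issues a small query over HTTP using Python's requests library; the choice of dbo:Film is only an example class, and the endpoint returns standard SPARQL JSON results.

```python
import requests

# A hedged sketch: fetch five film labels from DBpedia's public SPARQL
# endpoint. dbo:Film and rdfs:label are standard DBpedia/RDFS terms.
QUERY = """
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?film ?label WHERE {
  ?film a dbo:Film ;
        rdfs:label ?label .
  FILTER (lang(?label) = "en")
} LIMIT 5
"""

resp = requests.get(
    "https://dbpedia.org/sparql",
    params={"query": QUERY, "format": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()
for b in resp.json()["results"]["bindings"]:
    print(b["film"]["value"], b["label"]["value"])
```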
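For the Amsterdam Museum RDF, here is a hedged rdflib sketch that counts resources typed as edm:ProvidedCHO, the EDM class for provided cultural heritage objects. The input file name is a placeholder, assuming a local Turtle serialization of the dump.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

# A hedged sketch: parse a local Turtle serialization of the data set and
# count resources typed as edm:ProvidedCHO, EDM's heritage-object class.
# "am_data.ttl" is a placeholder; the real dump may be named differently.
EDM = Namespace("http://www.europeana.eu/schemas/edm/")

g = Graph()
g.parse("am_data.ttl", format="turtle")

chos = set(g.subjects(RDF.type, EDM.ProvidedCHO))
print(f"{len(chos)} cultural heritage objects found")
```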
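For the Freebase data dumps, a sketch that streams a gzipped dump line by line rather than unpacking it to disk. The tab-separated triple layout and the file name are assumptions based on the RDF flavor of the dumps; other dump formats would need different parsing.

```python
import gzip
from itertools import islice

# A hedged sketch: stream a gzipped Freebase dump without unpacking it.
# Assumes the tab-separated triple layout of the RDF dumps; the file
# name "freebase-dump.gz" is a placeholder.
with gzip.open("freebase-dump.gz", "rt", encoding="utf-8") as f:
    for line in islice(f, 5):  # peek at the first five lines
        fields = line.rstrip("\n").split("\t")
        print(fields[:3])  # subject, predicate, object
```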