Fork of https://github.com/oxigraph/oxigraph.git for the purpose of NextGraph project


Rudf


Rudf is a work-in-progress graph database implementing the SPARQL standard.

There is no released version yet.

Its goal is to provide a compliant, safe and fast graph database based on the RocksDB key-value store. It is written in Rust.

It is split into multiple parts:

  • The lib directory contains the database itself, written as a Rust library.
  • The server directory contains a stand-alone binary of a web server implementing the SPARQL 1.1 Protocol.
  • The wikibase directory contains a stand-alone binary of a web server able to synchronize with a Wikibase instance.

Run the web server

Build

You need to have a recent stable version of Rust and Cargo installed.

Once they are installed, executing cargo build --release in the root directory of this repository downloads the dependencies and compiles the full server. It creates a fat binary in target/release/rudf_server.

Usage

Run ./rudf_server to start the server. It listens on localhost:7878 by default.

The server provides an HTML UI with a form to execute SPARQL requests.

It provides the following routes:

  • / accepts data POSTed to the server. For example, curl -f -X POST -H 'Content-Type:application/n-triples' --data-binary "@MY_FILE.nt" http://localhost:7878/ will add the N-Triples file MY_FILE.nt to the server repository. Turtle, TriG, N-Triples, N-Quads and RDF/XML are supported.
  • /query evaluates SPARQL queries against the server repository following the SPARQL 1.1 Protocol. For example: curl -f -X POST -H 'Content-Type:application/sparql-query' --data 'SELECT * WHERE { ?s ?p ?o } LIMIT 10' http://localhost:7878/query. This route supports content negotiation and can return Turtle, N-Triples, RDF/XML, the SPARQL Query Results XML Format and the SPARQL Query Results JSON Format.
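The SPARQL Query Results JSON Format mentioned above has a fixed shape for SELECT queries. As a standalone sketch (the sample response below is hand-written illustrative data, not output from a live server), the bindings can be read like this:

```python
import json

# Hand-written sample in the SPARQL 1.1 Query Results JSON Format,
# shaped like what /query would return for a SELECT query.
sample = """
{
  "head": {"vars": ["s", "p", "o"]},
  "results": {
    "bindings": [
      {
        "s": {"type": "uri", "value": "http://example.com/alice"},
        "p": {"type": "uri", "value": "http://xmlns.com/foaf/0.1/name"},
        "o": {"type": "literal", "value": "Alice"}
      }
    ]
  }
}
"""

data = json.loads(sample)
variables = data["head"]["vars"]  # the projected variables, in order
rows = []
for binding in data["results"]["bindings"]:
    # Each binding maps a variable name to a term with "type" and "value";
    # unbound variables are simply absent from the binding.
    rows.append({var: binding[var]["value"] for var in variables if var in binding})
print(rows)
```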

Use rudf_server --help to see the possible options when starting the server.

Run the web server for Wikibase

Build

You need to have a recent stable version of Rust and Cargo installed.

Once they are installed, executing cargo build --release in the root directory of this repository downloads the dependencies and compiles the full server. It creates a fat binary in target/release/rudf_wikibase.

Usage

To start a server that is synchronized with test.wikidata.org you should run:

./rudf_wikibase --mediawiki_api=https://test.wikidata.org/w/api.php --mediawiki_base_url=https://test.wikidata.org/wiki/ --namespaces=0,120 --file=test.wikidata

It creates a SPARQL endpoint listening on localhost:7878/query that can be queried just like Blazegraph.
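Querying this endpoint follows the same SPARQL 1.1 Protocol as the server above. As a sketch, the snippet below builds (but does not send) the direct-POST request a client would issue; the endpoint URL is the default from this README, and sending only works once the server is actually running:

```python
from urllib import request

endpoint = "http://localhost:7878/query"  # default address from this README
query = "SELECT * WHERE { ?s ?p ?o } LIMIT 10"

# Direct POST per the SPARQL 1.1 Protocol: the query text is the request
# body, and the Content-Type header identifies it as a SPARQL query.
req = request.Request(
    endpoint,
    data=query.encode("utf-8"),
    headers={
        "Content-Type": "application/sparql-query",
        "Accept": "application/sparql-results+json",
    },
    method="POST",
)
# With the server running, request.urlopen(req) would send it and return
# the results in the negotiated format.
```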

The configuration parameters are:

  • mediawiki_api URL of the MediaWiki API to use
  • mediawiki_base_url Base URL of MediaWiki pages like https://test.wikidata.org/wiki/ for test.wikidata.org or http://localhost/w/index.php?title= for "vanilla" installations.
  • namespaces The IDs of the Wikibase namespaces to synchronize with, separated by ,.
  • file Path where Rudf should store its data.
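As a restatement of the invocation shown above, the sketch below spells out each parameter as a shell variable; all values are the test.wikidata.org examples from this README:

```shell
MEDIAWIKI_API="https://test.wikidata.org/w/api.php"   # mediawiki_api: MediaWiki API endpoint
MEDIAWIKI_BASE_URL="https://test.wikidata.org/wiki/"  # mediawiki_base_url: base URL of wiki pages
NAMESPACES="0,120"                                    # namespaces: Wikibase namespace IDs, comma-separated
FILE="test.wikidata"                                  # file: where Rudf stores its data

# Compose the command line without running it; run it with: eval "$CMD"
# (once rudf_wikibase has been built and is in the current directory).
CMD="./rudf_wikibase --mediawiki_api=$MEDIAWIKI_API --mediawiki_base_url=$MEDIAWIKI_BASE_URL --namespaces=$NAMESPACES --file=$FILE"
echo "$CMD"
```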

License

This project is licensed under either of

  • Apache License, Version 2.0 (LICENSE-APACHE)
  • MIT license (LICENSE-MIT)

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in Rudf by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.