# Oxigraph Server
Oxigraph Server is a standalone HTTP server providing a graph database implementing the SPARQL standard.
Its goal is to provide a compliant, safe, and fast graph database based on the RocksDB key-value store. It is written in Rust. It also provides a set of utility functions for reading, writing, and processing RDF files.
Oxigraph is in heavy development and SPARQL query evaluation has not been optimized yet.
Oxigraph provides three different installation methods for Oxigraph server:

- `cargo install` (multiplatform)
- A Docker image
- A Homebrew formula
It is also usable as a Rust library and as a Python library.
Oxigraph implements the following specifications:
- SPARQL 1.1 Query, SPARQL 1.1 Update, and SPARQL 1.1 Federated Query.
- Turtle, TriG, N-Triples, N-Quads, and RDF XML RDF serialization formats for both data ingestion and retrieval using the Rio library.
- SPARQL Query Results XML Format, SPARQL 1.1 Query Results JSON Format and SPARQL 1.1 Query Results CSV and TSV Formats.
- SPARQL 1.1 Protocol and SPARQL 1.1 Graph Store HTTP Protocol.
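For illustration, the SPARQL 1.1 Query Results JSON Format mentioned above can be consumed with nothing but a JSON parser. Here is a minimal Python sketch over a hypothetical result document (the document below is an invented example; its structure follows the W3C specification):

```python
import json

# A hypothetical SPARQL 1.1 Query Results JSON document, as a server
# could return it for 'SELECT * WHERE { ?s ?p ?o } LIMIT 1'.
doc = """{
  "head": {"vars": ["s", "p", "o"]},
  "results": {"bindings": [
    {"s": {"type": "uri", "value": "http://example.com/s"},
     "p": {"type": "uri", "value": "http://example.com/p"},
     "o": {"type": "literal", "value": "hello"}}
  ]}
}"""

results = json.loads(doc)
variables = results["head"]["vars"]  # the projected variables
for binding in results["results"]["bindings"]:
    # Each binding maps a variable name to a term with "type" and "value".
    row = {var: binding[var]["value"] for var in variables if var in binding}
    print(row)
```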
A preliminary benchmark is provided.
## Installation
You need to have a recent stable version of Rust and Cargo installed.
To download, build, and install the latest released version, run `cargo install oxigraph_server`. There is no need to clone the git repository.

To compile the server from source, clone this git repository and run `cargo build --release` in the `server` directory. After downloading its dependencies, Cargo creates a fat binary in `target/release/oxigraph_server`.
## Usage
Run `oxigraph_server --location my_data_storage_directory serve` to start the server, where `my_data_storage_directory` is the directory where you want the Oxigraph data to be stored. The server listens on `localhost:7878` by default.
The server provides an HTML UI, based on YASGUI, with a form to execute SPARQL requests.
It provides the following REST actions:
- `/query` allows evaluating SPARQL queries against the server repository following the SPARQL 1.1 Protocol. For example:
  ```sh
  curl -X POST -H 'Content-Type: application/sparql-query' \
    --data 'SELECT * WHERE { ?s ?p ?o } LIMIT 10' http://localhost:7878/query
  ```
  This action supports content negotiation and can return Turtle, N-Triples, RDF XML, SPARQL Query Results XML Format, and SPARQL Query Results JSON Format.
- `/update` allows executing SPARQL updates against the server repository following the SPARQL 1.1 Protocol. For example:
  ```sh
  curl -X POST -H 'Content-Type: application/sparql-update' \
    --data 'DELETE WHERE { <http://example.com/s> ?p ?o }' http://localhost:7878/update
  ```
- `/store` allows retrieving and changing the server content using the SPARQL 1.1 Graph Store HTTP Protocol. For example:
  ```sh
  curl -f -X POST -H 'Content-Type: application/n-triples' \
    --data-binary "@MY_FILE.nt" "http://localhost:7878/store?graph=http://example.com/g"
  ```
  will add the N-Triples file `MY_FILE.nt` to the server dataset inside of the `http://example.com/g` named graph. Turtle, N-Triples, and RDF XML are supported. It is also possible to `POST`, `PUT`, and `GET` the complete RDF dataset on the server using RDF dataset formats (TriG and N-Quads) against the `/store` endpoint. For example:
  ```sh
  curl -f -X POST -H 'Content-Type: application/n-quads' \
    --data-binary "@MY_FILE.nq" http://localhost:7878/store
  ```
  will add the N-Quads file `MY_FILE.nq` to the server dataset.
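If you want to generate a small N-Triples file to upload this way, here is a minimal Python sketch. The `triple` and `escape_literal` helpers are hypothetical, not part of Oxigraph, and only handle basic string escaping from the N-Triples grammar:

```python
def escape_literal(value: str) -> str:
    # Minimal N-Triples string escaping: backslash, quote, newline, tab.
    return (value.replace("\\", "\\\\").replace('"', '\\"')
                 .replace("\n", "\\n").replace("\t", "\\t"))

def triple(s: str, p: str, o_literal: str) -> str:
    # One N-Triples statement: IRIs in angle brackets, literal quoted,
    # terminated by ' .'.
    return f'<{s}> <{p}> "{escape_literal(o_literal)}" .'

line = triple("http://example.com/s", "http://example.com/p", "hello")
print(line)  # <http://example.com/s> <http://example.com/p> "hello" .
```

Writing one such line per statement to a `.nt` file produces a document that can be posted to `/store` as in the curl example above.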
Use `oxigraph_server --help` to see the possible options when starting the server.

It is also possible to load RDF data offline using bulk loading:

```sh
oxigraph_server --location my_data_storage_directory load --file my_file.nq
```
## Using a Docker image

### Display the help menu

```sh
docker run --rm oxigraph/oxigraph --help
```

### Run the Web server

Expose the server on port 7878 of the host machine, and save data in the local `./data` folder:

```sh
docker run --rm -v $PWD/data:/data -p 7878:7878 oxigraph/oxigraph --location /data serve --bind 0.0.0.0:7878
```
You can then access it from your machine on port 7878:

```sh
# Open the GUI in a browser
firefox http://localhost:7878

# Post some data
curl http://localhost:7878/store?default -H 'Content-Type: text/turtle' -d@./data.ttl

# Make a query
curl -X POST -H 'Accept: application/sparql-results+json' -H 'Content-Type: application/sparql-query' --data 'SELECT * WHERE { ?s ?p ?o } LIMIT 10' http://localhost:7878/query

# Make an UPDATE
curl -X POST -H 'Content-Type: application/sparql-update' --data 'DELETE WHERE { <http://example.com/s> ?p ?o }' http://localhost:7878/update
```
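The curl examples above send the query via POST; the SPARQL 1.1 Protocol also allows sending it via GET, percent-encoded in a `query` parameter. A minimal Python sketch that builds such a request URL (the endpoint and query are just the examples from above; no request is actually sent):

```python
from urllib.parse import urlencode

endpoint = "http://localhost:7878/query"  # assumes a local Oxigraph server
query = "SELECT * WHERE { ?s ?p ?o } LIMIT 10"

# Per the SPARQL 1.1 Protocol, a query may be passed over GET in the
# 'query' parameter, form-urlencoded.
url = f"{endpoint}?{urlencode({'query': query})}"
print(url)
```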
### Run the Web server with basic authentication

It can be useful to make the Oxigraph SPARQL endpoint available publicly, with a layer of authentication on `/update` to control who can add data. You can do so by using nginx basic authentication in an additional Docker container with `docker-compose`. First, create a `nginx.conf` file:
```nginx
daemon off;
events {
  worker_connections 1024;
}
http {
  server {
    server_name localhost;
    listen 7878;
    rewrite ^/(.*) /$1 break;
    proxy_ignore_client_abort on;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header Access-Control-Allow-Origin "*";
    location ~ ^(/|/query)$ {
      proxy_pass http://oxigraph:7878;
      proxy_pass_request_headers on;
    }
    location ~ ^(/update|/store)$ {
      auth_basic "Oxigraph Administrator's Area";
      auth_basic_user_file /etc/nginx/.htpasswd;
      proxy_pass http://oxigraph:7878;
      proxy_pass_request_headers on;
    }
  }
}
```
Then create a `docker-compose.yml` in the same folder; you can change the default user and password in the `environment` section:
```yaml
version: "3"
services:
  oxigraph:
    image: ghcr.io/oxigraph/oxigraph:latest
    ## To build from local source code:
    # build:
    #   context: .
    #   dockerfile: server/Dockerfile
    volumes:
      - ./data:/data
  nginx-auth:
    image: nginx:1.21.4
    environment:
      - OXIGRAPH_USER=oxigraph
      - OXIGRAPH_PASSWORD=oxigraphy
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      ## For multiple users: uncomment this line to mount a pre-generated .htpasswd
      # - ./.htpasswd:/etc/nginx/.htpasswd
    ports:
      - 7878:7878
    entrypoint: "bash -c 'echo -n $OXIGRAPH_USER: >> /etc/nginx/.htpasswd && echo $OXIGRAPH_PASSWORD | openssl passwd -stdin -apr1 >> /etc/nginx/.htpasswd && /docker-entrypoint.sh nginx'"
```
Once the `docker-compose.yml` and `nginx.conf` files are ready, start the Oxigraph server and the nginx proxy for authentication on http://localhost:7878:

```sh
docker-compose up
```
You can then update the graph using basic authentication. For example with curl: change `$OXIGRAPH_USER` and `$OXIGRAPH_PASSWORD`, or set them as environment variables, then run this command to insert a simple triple:

```sh
curl -X POST -u $OXIGRAPH_USER:$OXIGRAPH_PASSWORD -H 'Content-Type: application/sparql-update' --data 'INSERT DATA { <http://example.com/s> <http://example.com/p> <http://example.com/o> }' http://localhost:7878/update
```
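Under the hood, `curl -u` performs HTTP Basic authentication (RFC 7617): the `user:password` pair is base64-encoded and sent in the `Authorization` header. A purely illustrative Python sketch, using the default credentials from the compose file above:

```python
import base64

user, password = "oxigraph", "oxigraphy"  # defaults from docker-compose.yml

# RFC 7617: the credentials are joined with ':', base64-encoded, and
# prefixed with the 'Basic' scheme. This is what 'curl -u' sends.
token = base64.b64encode(f"{user}:{password}".encode()).decode()
header = f"Authorization: Basic {token}"
print(header)
```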
If you want multiple users, comment out the `entrypoint:` line in the `docker-compose.yml` file, uncomment the `.htpasswd` volume, then generate an entry for each user in the `.htpasswd` file with this command:

```sh
htpasswd -Bbn $OXIGRAPH_USER $OXIGRAPH_PASSWORD >> .htpasswd
```
### Build the image

You can easily build your own Docker image by cloning this repository with its submodules and going to the root folder:

```sh
git clone --recursive https://github.com/oxigraph/oxigraph.git
cd oxigraph
```

Then run this command to build the image locally:

```sh
docker build -t oxigraph/oxigraph -f server/Dockerfile .
```
## Homebrew

Oxigraph maintains a Homebrew formula in a custom tap. To install Oxigraph server using Homebrew, run:

```sh
brew tap oxigraph/oxigraph
brew install oxigraph
```

It installs the `oxigraph_server` binary. See the usage documentation above to learn how to use it.
## Migration guide

### From 0.2 to 0.3

- The CLI API has been completely rewritten. To start the server, run `oxigraph_server serve --location MY_STORAGE` instead of `oxigraph_server --file MY_STORAGE`.
- Fast data bulk loading is now supported using `oxigraph_server load --location MY_STORAGE --file MY_FILE`. The file format is guessed from the extension (`.nt`, `.ttl`, `.nq`...).
- RDF-star is now implemented.
- All operations are now transactional using the "repeatable read" isolation level: the store only exposes changes that have been "committed" (i.e. no partial writes) and the exposed state does not change for the complete duration of a read operation (e.g. a SPARQL query) or a read/write operation (e.g. a SPARQL update).
## License
This project is licensed under either of
- Apache License, Version 2.0, (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your option.
## Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in Oxigraph by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.