@@ -14,7 +14,7 @@ It also provides a set of utility functions for reading, writing, and processing
Oxigraph is in heavy development and SPARQL query evaluation has not been optimized yet.
Oxigraph also provides [a CLI tool](https://crates.io/crates/oxigraph-cli) and [a Python library](https://pyoxigraph.readthedocs.io/) based on this library.
Oxigraph implements the following specifications:
@@ -48,8 +48,8 @@ if let QueryResults::Solutions(mut solutions) = store.query("SELECT ?s WHERE {
```
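For orientation, here is a minimal end-to-end sketch of this API (an in-memory store, one insertion, then a SPARQL query; the `Box<dyn Error>` error handling is illustrative):

```rust
use oxigraph::model::*;
use oxigraph::sparql::QueryResults;
use oxigraph::store::Store;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let store = Store::new()?;

    // Insert a quad into the default graph.
    let ex = NamedNodeRef::new("http://example.com")?;
    store.insert(QuadRef::new(ex, ex, ex, GraphNameRef::DefaultGraph))?;

    // Evaluate a SPARQL query, as in the snippet above.
    if let QueryResults::Solutions(mut solutions) = store.query("SELECT ?s WHERE { ?s ?p ?o }")? {
        assert_eq!(solutions.next().unwrap()?.get("s"), Some(&ex.into_owned().into()));
    }
    Ok(())
}
```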
Some parts of this library are available as standalone crates:
* [`oxrdf`](https://crates.io/crates/oxrdf), data structures encoding RDF basic concepts (the [`oxigraph::model`](crate::model) module).
* [`oxrdfio`](https://crates.io/crates/oxrdfio), a unified parser and serializer API for RDF formats (the [`oxigraph::io`](crate::io) module). It itself relies on:
* [`oxttl`](https://crates.io/crates/oxttl), N-Triples, N-Quads, Turtle, TriG and N3 parsing and serialization.
* [`oxrdfxml`](https://crates.io/crates/oxrdfxml), RDF/XML parsing and serialization.
* [`spargebra`](https://crates.io/crates/spargebra), a SPARQL parser.
@@ -57,7 +57,7 @@ Some parts of this library are available as standalone crates:
* [`sparopt`](https://crates.io/crates/sparopt), a SPARQL optimizer.
* [`oxsdatatypes`](https://crates.io/crates/oxsdatatypes), an implementation of some XML Schema datatypes.
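For a taste of the lowest layer, a minimal `oxrdf` sketch (the model constructors below are the crate's basic building blocks; the printed form follows the N-Triples syntax):

```rust
use oxrdf::{Literal, NamedNode, Triple};

fn main() -> Result<(), oxrdf::IriParseError> {
    let subject = NamedNode::new("http://example.com/s")?;
    let predicate = NamedNode::new("http://example.com/p")?;
    let object = Literal::new_simple_literal("o");
    let triple = Triple::new(subject, predicate, object);
    println!("{triple}"); // <http://example.com/s> <http://example.com/p> "o"
    Ok(())
}
```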
To build the library locally, don't forget to fetch the submodules: use `git clone --recursive https://github.com/oxigraph/oxigraph.git` to clone the repository including its submodules, or `git submodule update --init` to add the submodules to an already cloned repository.
/// - [`with_base_iri`](Self::with_base_iri) to resolve the relative IRIs.
/// - [`rename_blank_nodes`](Self::rename_blank_nodes) to rename the blank nodes to auto-generated numbers to avoid conflicts when merging RDF graphs together.
/// - [`without_named_graphs`](Self::without_named_graphs) to parse a single graph.
/// - [`unchecked`](Self::unchecked) to skip some validations if the file is already known to be valid.
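///
/// A combined sketch of these options (doc-test style; the `parse_read` entry point is assumed from the crate's read-side API):
/// ```
/// use oxrdfio::{RdfFormat, RdfParser};
///
/// let file = "<s> <p> <o> .";
/// let parser = RdfParser::from_format(RdfFormat::Turtle)
///     .with_base_iri("http://example.com/")? // resolves <s>, <p> and <o>
///     .rename_blank_nodes();
/// for quad in parser.parse_read(file.as_bytes()) {
///     println!("{}", quad?);
/// }
/// # Result::<_, Box<dyn std::error::Error>>::Ok(())
/// ```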
/// <div class="warning">Do not forget to run the [`finish`](ToWriteQuadWriter::finish()) method to properly write the last bytes of the file.</div>
/// <div class="warning">
///
/// <div class="warning">This writer does unbuffered writes. You might want to use [`BufWriter`](io::BufWriter) to avoid that.</div>
/// Do not forget to run the [`finish`](ToWriteQuadWriter::finish()) method to properly write the last bytes of the file.</div>
///
/// <div class="warning">
///
/// This writer does unbuffered writes. You might want to use [`BufWriter`](io::BufWriter) to avoid that.</div>
///
/// ```
/// use oxrdfio::{RdfFormat, RdfSerializer};
@@ -118,9 +122,13 @@ impl RdfSerializer {
/// Writes to a Tokio [`AsyncWrite`] implementation.
///
/// <div class="warning">Do not forget to run the [`finish`](ToTokioAsyncWriteQuadWriter::finish()) method to properly write the last bytes of the file.</div>
/// <div class="warning">
///
/// Do not forget to run the [`finish`](ToTokioAsyncWriteQuadWriter::finish()) method to properly write the last bytes of the file.</div>
///
/// <div class="warning">
///
/// <div class="warning">This writer does unbuffered writes. You might want to use [`BufWriter`](tokio::io::BufWriter) to avoid that.</div>
/// This writer does unbuffered writes. You might want to use [`BufWriter`](tokio::io::BufWriter) to avoid that.</div>
///
/// ```
/// use oxrdfio::{RdfFormat, RdfSerializer};
@@ -179,9 +187,13 @@ impl From<RdfFormat> for RdfSerializer {
///
/// Can be built using [`RdfSerializer::serialize_to_write`].
///
/// <div class="warning">Do not forget to run the [`finish`](ToWriteQuadWriter::finish()) method to properly write the last bytes of the file.</div>
/// <div class="warning">
///
/// <div class="warning">This writer does unbuffered writes. You might want to use [`BufWriter`](io::BufWriter) to avoid that.</div>
/// Do not forget to run the [`finish`](ToWriteQuadWriter::finish()) method to properly write the last bytes of the file.</div>
///
/// <div class="warning">
///
/// This writer does unbuffered writes. You might want to use [`BufWriter`](io::BufWriter) to avoid that.</div>
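///
/// A minimal sketch of the write-then-finish workflow described above (doc-test style; the format choice is arbitrary):
/// ```
/// use oxrdf::{NamedNodeRef, QuadRef};
/// use oxrdfio::{RdfFormat, RdfSerializer};
///
/// let mut writer = RdfSerializer::from_format(RdfFormat::NQuads).serialize_to_write(Vec::new());
/// let ex = NamedNodeRef::new("http://example.com")?;
/// writer.write_quad(QuadRef::new(ex, ex, ex, ex))?;
/// let buffer = writer.finish()?; // writes the last bytes and returns the inner writer
/// assert!(!buffer.is_empty());
/// # Result::<_, Box<dyn std::error::Error>>::Ok(())
/// ```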
@@ -158,9 +162,13 @@ impl QueryResultsSerializer {
/// Returns a `SolutionsWriter` allowing writing query solutions into the given [`Write`] implementation.
///
/// <div class="warning">Do not forget to run the [`finish`](ToWriteSolutionsWriter::finish()) method to properly write the last bytes of the file.</div>
/// <div class="warning">
///
/// Do not forget to run the [`finish`](ToWriteSolutionsWriter::finish()) method to properly write the last bytes of the file.</div>
///
/// <div class="warning">
///
/// <div class="warning">This writer does unbuffered writes. You might want to use [`BufWriter`](io::BufWriter) to avoid that.</div>
/// This writer does unbuffered writes. You might want to use [`BufWriter`](io::BufWriter) to avoid that.</div>
///
/// Example in XML (the API is the same for JSON, CSV and TSV):
/// ```
@@ -223,9 +231,13 @@ impl From<QueryResultsFormat> for QueryResultsSerializer {
///
/// Could be built using a [`QueryResultsSerializer`].
///
/// <div class="warning">Do not forget to run the [`finish`](ToWriteSolutionsWriter::finish()) method to properly write the last bytes of the file.</div>
/// <div class="warning">
///
/// <div class="warning">This writer does unbuffered writes. You might want to use [`BufWriter`](io::BufWriter) to avoid that.</div>
/// Do not forget to run the [`finish`](ToWriteSolutionsWriter::finish()) method to properly write the last bytes of the file.</div>
///
/// <div class="warning">
///
/// This writer does unbuffered writes. You might want to use [`BufWriter`](io::BufWriter) to avoid that.</div>
///
/// Example in TSV (the API is the same for JSON, XML and CSV):
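/// (A reconstructed sketch; the `serialize_solutions_to_write` name and the `write` call shape are assumptions based on these docs.)
/// ```
/// use oxrdf::{LiteralRef, Variable, VariableRef};
/// use sparesults::{QueryResultsFormat, QueryResultsSerializer};
///
/// let serializer = QueryResultsSerializer::from_format(QueryResultsFormat::Tsv);
/// let mut writer = serializer.serialize_solutions_to_write(Vec::new(), vec![Variable::new("foo")?])?;
/// writer.write([(VariableRef::new("foo")?, LiteralRef::from("test"))])?;
/// let bytes = writer.finish()?; // writes the last bytes
/// assert!(!bytes.is_empty());
/// # Result::<_, Box<dyn std::error::Error>>::Ok(())
/// ```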
/// Could be built using a [`QueryResultsSerializer`].
///
/// <div class="warning">Do not forget to run the [`finish`](ToTokioAsyncWriteSolutionsWriter::finish()) method to properly write the last bytes of the file.</div>
/// <div class="warning">
///
/// Do not forget to run the [`finish`](ToTokioAsyncWriteSolutionsWriter::finish()) method to properly write the last bytes of the file.</div>
///
/// <div class="warning">
///
/// <div class="warning">This writer does unbuffered writes. You might want to use [`BufWriter`](tokio::io::BufWriter) to avoid that.</div>
/// This writer does unbuffered writes. You might want to use [`BufWriter`](tokio::io::BufWriter) to avoid that.</div>
///
/// Example in TSV (the API is the same for JSON, CSV and XML):
//! API to access an on-disk [RDF dataset](https://www.w3.org/TR/rdf11-concepts/#dfn-rdf-dataset).
//!
//! The entry point of the module is the [`Store`] struct.
//!
//! Usage example:
//! ```
//! use oxigraph::store::Store;
@@ -605,7 +607,9 @@ impl Store {
/// Atomically adds a set of quads to this store.
///
/// <div class="warning">This operation uses a memory heavy transaction internally, use the [`bulk_loader`](Store::bulk_loader) if you plan to add ten of millions of triples.</div>
/// <div class="warning">
///
/// This operation uses a memory-heavy transaction internally; use the [`bulk_loader`](Store::bulk_loader) if you plan to add tens of millions of triples.</div>
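///
/// A minimal usage sketch (doc-test style, using the in-memory [`Store::new`] for brevity):
/// ```
/// use oxigraph::model::*;
/// use oxigraph::store::Store;
///
/// let store = Store::new()?;
/// let ex = NamedNode::new("http://example.com")?;
/// store.extend([
///     Quad::new(ex.clone(), ex.clone(), ex.clone(), GraphName::DefaultGraph),
///     Quad::new(ex.clone(), ex.clone(), ex, NamedNode::new("http://example.com/g")?),
/// ])?;
/// assert_eq!(store.len()?, 2);
/// # Result::<_, Box<dyn std::error::Error>>::Ok(())
/// ```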
pub fn extend(
    &self,
    quads: impl IntoIterator<Item = impl Into<Quad>>,
@@ -918,7 +922,9 @@ impl Store {
/// After its creation, the backup is usable using [`Store::open`]
/// like a regular Oxigraph database and operates independently from the original database.
///
/// <div class="warning">Backups are only possible for on-disk databases created using [`Store::open`].</div>
/// <div class="warning">
///
/// Backups are only possible for on-disk databases created using [`Store::open`].
/// Temporary in-memory databases created using [`Store::new`] are not compatible with the RocksDB backup system.</div>
///
/// <div class="warning">An error is raised if the `target_directory` already exists.</div>
@@ -1497,13 +1503,15 @@ impl Iterator for GraphNameIter {
/// A bulk loader for quickly loading large amounts of data into the store.
///
/// <div class="warning">The operations provided here are not atomic.</div>
/// <div class="warning">The operations provided here are not atomic.
/// If the operation fails in the middle, only a part of the data may be written to the store.
/// Results might get weird if you delete data during the loading process.</div>
///
/// <div class="warning">It is optimized for speed.</div>
/// Memory usage is configurable using [`BulkLoader::with_max_memory_size_in_megabytes`]
/// and the number of used threads with [`BulkLoader::with_num_threads`].
/// <div class="warning">
///
/// It is optimized for speed.</div>
/// Memory usage is configurable using [`with_max_memory_size_in_megabytes`](Self::with_max_memory_size_in_megabytes)
/// and the number of used threads with [`with_num_threads`](Self::with_num_threads).
/// By default, the memory consumption target (excluding the system and RocksDB internal consumption)
/// is around 2GB per thread, and 2 threads are used.
/// These targets are considered per loaded file.
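///
/// A minimal configuration sketch (the values are illustrative, not recommendations):
/// ```
/// use oxigraph::model::*;
/// use oxigraph::store::Store;
///
/// let store = Store::new()?;
/// let ex = NamedNode::new("http://example.com")?;
/// store
///     .bulk_loader()
///     .with_max_memory_size_in_megabytes(2_000) // ~2GB target per thread
///     .with_num_threads(2)
///     .load_quads([Quad::new(ex.clone(), ex.clone(), ex, GraphName::DefaultGraph)])?;
/// assert_eq!(store.len()?, 1);
/// # Result::<_, Box<dyn std::error::Error>>::Ok(())
/// ```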
@@ -1598,7 +1606,9 @@ impl BulkLoader {
/// If the parsing fails in the middle of the file, only a part of it may be written to the store.
/// Results might get weird if you delete data during the loading process.</div>
///
/// <div class="warning">This method is optimized for speed. See [the struct](BulkLoader) documentation for more details.</div>
/// <div class="warning">
///
/// This method is optimized for speed. See [the struct](Self) documentation for more details.</div>
///
/// To get better speed on valid datasets, consider enabling the [`RdfParser::unchecked`] option to skip some validations.
///
@@ -1668,7 +1678,9 @@ impl BulkLoader {
/// If the parsing fails in the middle of the file, only a part of it may be written to the store.
/// Results might get weird if you delete data during the loading process.</div>
///
/// <div class="warning">This method is optimized for speed. See [the struct](BulkLoader) documentation for more details.</div>
/// <div class="warning">
///
/// This method is optimized for speed. See [the struct](Self) documentation for more details.</div>
///
/// Usage example:
/// ```
@@ -1727,7 +1739,9 @@ impl BulkLoader {
/// If the parsing fails in the middle of the file, only a part of it may be written to the store.
/// Results might get weird if you delete data during the loading process.</div>
///
/// <div class="warning">This method is optimized for speed. See [the struct](BulkLoader) documentation for more details.</div>
/// <div class="warning">
///
/// This method is optimized for speed. See [the struct](Self) documentation for more details.</div>
///
/// Usage example:
/// ```
@@ -1788,7 +1802,9 @@ impl BulkLoader {
/// If the process fails in the middle of the file, only a part of the data may be written to the store.
/// Results might get weird if you delete data during the loading process.</div>
///
/// <div class="warning">This method is optimized for speed. See [the struct](BulkLoader) documentation for more details.</div>
/// <div class="warning">
///
/// This method is optimized for speed. See [the struct](Self) documentation for more details.</div>
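///
/// A minimal usage sketch (doc-test style, using an in-memory store for brevity):
/// ```
/// use oxigraph::model::*;
/// use oxigraph::store::Store;
///
/// let store = Store::new()?;
/// let ex = NamedNode::new("http://example.com")?;
/// store
///     .bulk_loader()
///     .load_quads([Quad::new(ex.clone(), ex.clone(), ex, GraphName::DefaultGraph)])?;
/// assert_eq!(store.len()?, 1);
/// # Result::<_, Box<dyn std::error::Error>>::Ok(())
/// ```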
pub fn load_quads(
    &self,
    quads: impl IntoIterator<Item = impl Into<Quad>>,
@@ -1802,7 +1818,9 @@ impl BulkLoader {
/// If the process fails in the middle of the file, only a part of the data may be written to the store.
/// Results might get weird if you delete data during the loading process.</div>
///
/// <div class="warning">This method is optimized for speed. See [the struct](BulkLoader) documentation for more details.</div>
/// <div class="warning">
///
/// This method is optimized for speed. See [the struct](Self) documentation for more details.</div>
* Python 3.7 and ``musllinux_1_1`` support have been removed.
* :py:class:`OSError` is now raised instead of :py:class:`IOError` on OS errors.
* The ``mime_type`` parameter has been renamed to ``format`` in I/O functions.
  Using :py:class:`RdfFormat` is recommended to describe formats.
* Boolean SPARQL results are now encoded with the :py:class:`QueryBoolean` class and not a simple :py:class:`bool`.
* A ``path`` parameter has been added to all I/O methods to read from a file.
  The existing ``input`` parameter now considers :py:class:`str` values to be a serialization to parse.
  For example, ``parse(path="foo.ttl")`` will parse the file ``foo.ttl``, whereas ``parse("foo", format=RdfFormat.N_TRIPLES)`` will parse an N-Triples document whose content is ``foo``.