master
Niko PLP 4 months ago
parent b914a9a6d1
commit 8fd1f1a720
  1. src/config.ts (8 changed lines)
  2. src/pages/en/activitypub.md (4 changed lines)
  3. src/pages/en/documents.md (2 changed lines)
  4. src/pages/en/ecosystem.md (18 changed lines)
  5. src/pages/en/framework/nuri.md (239 changed lines)
  6. src/pages/en/framework/permissions.md (40 changed lines)
  7. src/pages/en/framework/roadmap.md (4 changed lines)
  8. src/pages/en/framework/schema.md (78 changed lines)
  9. src/pages/en/framework/semantic.md (6 changed lines)
  10. src/pages/en/framework/services.md (2 changed lines)
  11. src/pages/en/framework/signature.md (38 changed lines)
  12. src/pages/en/framework/smart-contract.md (2 changed lines)
  13. src/pages/en/framework/transactions.md (44 changed lines)
  14. src/pages/en/solid.md (4 changed lines)
  15. src/pages/en/verifier.md (93 changed lines)

@@ -60,14 +60,14 @@ export const SIDEBAR: Sidebar = {
{ text: "Accessibility", link: "en/accessibility" },
{ text: "Roadmap", link: "en/roadmap" },
],
- "Framework": [
+ Framework: [
{ text: "Introduction", link: "en/framework" },
{ text: "Getting started", link: "en/framework/getting-started" },
- { text: "Data-First", link: "en/framework/data-first"},
+ { text: "Data-First", link: "en/framework/data-first" },
{ text: "CRDTs", link: "en/framework/crdts" },
{ text: "Semantic Web", link: "en/framework/semantic" },
{ text: "Schema", link: "en/framework/schema" },
- { text: "DID & NURI", link: "en/framework/nuri" },
+ { text: "DID & Nuri", link: "en/framework/nuri" },
{ text: "Transactions", link: "en/framework/transactions" },
{ text: "Permissions", link: "en/framework/permissions" },
{ text: "Signature", link: "en/framework/signature" },
@@ -88,7 +88,7 @@ export const SIDEBAR: Sidebar = {
{ text: "Security Audit", link: "en/audit" },
{ text: "Survey", link: "en/survey" },
],
- "Reference": [
+ Reference: [
{ text: "NodeJS SDK", link: "en/nodejs" },
{ text: "Web JS SDK", link: "en/web" },
{ text: "Rust SDK", link: "en/rust" },

@@ -4,4 +4,6 @@ description: Interact with ActivityPub social networks from NextGraph, thanks to
layout: ../../layouts/MainLayout.astro
---
Please bear with us as we are currently writing/publishing the documentation (August 23 until August 28, 2024).
Thanks to our collaboration with the [ActivityPods project](https://activitypods.org), we will soon offer full compatibility with the [ActivityPub](https://activitypub.rocks/) network. At the same time, all apps developed on the ActivityPods framework will be compatible with NextGraph.
Stay tuned!

@@ -40,7 +40,7 @@ Each Document is identified with a **unique identifier**, that is a public key r
It is a permanent ID of the form `did:ng:o:[44 chars of the ID]`, for example `did:ng:o:EghEnCqhpzp4Z7KXdbTx0LkQ1dUaaqwC0DGVS-0BAKAA`.
- This Identifier is a valid URI that can be used to link a document with another (both in the rich-text or in the RDF data, as subject, object or even predicate if a document is used to define an ontology). We call it **NURI**, for NextGraph URI.
+ This Identifier is a valid URI that can be used to link a document with another (both in the rich-text or in the RDF data, as subject, object or even predicate if a document is used to define an ontology). We call it **Nuri**, for NextGraph URI.
It follows the **DID** (Decentralized Identifier) format, and an ad hoc DID Method will be specified and registered shortly.

@@ -4,4 +4,20 @@ description: Apps developed with NextGraph integrate its ecosystem
layout: ../../layouts/MainLayout.astro
---
Please bear with us as we are currently writing/publishing the documentation (August 23 until August 28, 2024).
The NextGraph ecosystem is composed of:
- Several Apps that you can download and install for your OS (Linux, macOS, Windows, Android, and soon iOS), and a webapp available at [nextgraph.eu](https://nextgraph.eu)
- A Platform: within those apps, the user can already enjoy several [features](/en/features) that come by default, including a social network.
- A Framework for App developers, with SDKs for Node.js/Deno, for web apps in Svelte and React, an SDK for Rust, and soon for Tauri apps too.
- An App store integrated inside our official apps, so that App developers can easily reach their audience, and users can easily install those apps. Those apps will run inside an iframe within our official apps.
- App developers can also build standalone apps with our framework, without being embedded inside our official apps. They can be shipped separately and have total control over their GUIs. In this case, they will still need to integrate with our Capabilities APIs in order to request permissions to use the users' data. More on that soon.
- An Open Network that anybody can join, by running their own self-hosted broker.
All the pieces of this ecosystem are being built at the moment. We are currently focusing our efforts on readying the Framework for App developers. Stay tuned!
You can already try our [apps here](/en/getting-started).

@@ -1,7 +1,242 @@
---
- title: DID & NURI
+ title: DID & Nuri
description: NextGraph URI scheme and the DID method
layout: ../../../layouts/MainLayout.astro
---
Please bear with us as we are currently writing/publishing the documentation (August 23 until August 28, 2024).
Before you read more about Nuri and DID, we suggest you have a look at the [Verifier](/en/verifier) documentation, in order to understand key concepts that are used here.
A document has a URI, that we call a Nuri (NextGraph URI) and that uses the DID scheme.
The DID method `did:ng` is pending registration with the W3C DID Working Group, with a specification of how a DID resolver can apply the CRUD operations to create, read, update, and deactivate a `did:ng` DID document. Do not confuse a NextGraph Document with a DID document: a DID document contains the cryptographic materials.
Here we will detail the DID method and its format, and how it relates to RDF.
Most of the identifiers used in this scheme are of the PubKey, SymKey or Digest types, which are all 32-byte arrays. They are all preceded by a byte detailing the version number of the format, which is zero at the moment.
The byte array is reversed, and for this reason all the IDs end with the letter A. The arrays are encoded as base64url strings and always have 44 characters. Here is an example of an ID: `JQ5gCLoX_jalC9diTDCvx-Wu5ZQUcYWEE821nhVRMcEA`
We will use the placeholder `[XXX]` to denote one of those IDs in the formats below, and explain which part it stands for.
Those IDs and Keys are combined in different ways, detailed below, in order to form different types of capabilities and links.
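The encoding described above can be sketched as follows (a hypothetical helper, not the actual SDK API; it only illustrates why the IDs are 44 characters long and end with the letter A):

```typescript
// Sketch of the ID encoding described above (illustrative, not the SDK API):
// a version byte (currently zero) precedes the 32-byte key, the byte array
// is reversed (putting the zero byte last), and the 33 bytes are encoded
// as a base64url string.
function encodeNgId(key: Uint8Array): string {
  if (key.length !== 32) throw new Error("expected a 32-byte key");
  const versioned = new Uint8Array(33);
  versioned[0] = 0; // version number of the format, zero at the moment
  versioned.set(key, 1);
  versioned.reverse(); // the zero version byte is now at the end
  return Buffer.from(versioned).toString("base64url");
}

const id = encodeNgId(new Uint8Array(32).fill(7));
console.log(id.length); // 44 (33 bytes = 264 bits = exactly 44 base64 chars)
console.log(id.endsWith("A")); // true: the trailing zero byte encodes to "A"
```

The 44-character length and the trailing `A` follow directly from the 33-byte layout, so they make a cheap sanity check when handling IDs.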
#### capabilities
Some special formats need explanation:
- peers: a serialization of a vector of BrokerServer entries that each contain the IP and port and the PeerID of the broker (and a version number). This is the “locator” part of the URI, which tells which brokers should be contacted in order to join the overlay.
- ReadCap: an ObjectRef (id+key) towards the current Branch definition commit
- OverlayLink: in the case of an Inner Overlay, a ReadCap to the current Branch definition commit of the overlay branch of the store.
- ReadToken: a hash of a ReadCap. It is mostly used in ExtRequests and in SPARQL requests from untrusted apps or federated SPARQL requests. It gives the same level of permission as the hashed ReadCap, without revealing its content.
- PermaCap: a durable capability (for read, write or query)
We omit the prefix `did:ng` here, as it is common to all the schemes.
| did:ng capabilities | |
| ---------------------------------------- | ------------------------------------------------------------------------------------------------------- |
| | |
| Profile | `:a/b/g:[profileid]` (a storeid) |
| A profile's Inbox | `:a/b/g:[profileid]:p:[inboxInfo]:l:[peers]` |
| A Document's inbox | `:o:[repoid]:p:[inboxInfo]:l:[peers]` |
| fully qualified read-only Document Nuri | `:o:[repoid]:r:[readcap]:v:[overlayID]:l:[peers]` |
| fully qualified read-write Document Nuri | `:o:[repoid]:r:[readcap]:w:[overlayLink]:l:[peers]` |
| document accessed by Token | `:o:[repoid]:n:[readtoken]:v:[overlayID]:l:[peers]` |
| PermaCap | `:o:[repoid]:s:[permacap]:v:[overlayID]:l:[peers]` |
| specific commit | `:o:[repoid]:c:[commitid]:k:[commitkey]:h:[topicid]:v:[overlayID]:l:[peers]` |
| head with 2 commits | `:o:[repoid]:c:[commitid]:k:[commitkey]:c:[commitid]:k:[commitkey]:h:[topicid]:v:[overlayID]:l:[peers]` |
| named commit or branch | `:o:[repoid]:a:[name]:r:[repo_readcap]:v:[overlayID]:l:[peers]` |
| branchID | `:o:[repoid]:b:[branchId]:r:[branch_readcap]:v:[overlayID]:l:[peers] ` |
| binary file | `:j:[objectid]:k:[key]:v:[overlayID]:l:[peers]` |
The overlay and peers parts can be omitted if the capability is about a repo/object that belongs to the same store.
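For illustration, assembling a fully qualified read-only Document Nuri from these parts could look like this (a sketch with hypothetical names; the actual SDK may expose this differently):

```typescript
// Hypothetical parts of a read-only capability; every field is a
// placeholder string here (in reality they are the base64url-encoded
// IDs and keys described above).
interface ReadOnlyCap {
  repoId: string;    // :o: the document ID
  readCap: string;   // :r: ObjectRef (id+key) to the current Branch definition commit
  overlayId: string; // :v: the outer overlay ID
  peers?: string;    // :l: the locator; can be omitted within the same store
}

function readOnlyNuri(cap: ReadOnlyCap): string {
  let nuri = `did:ng:o:${cap.repoId}:r:${cap.readCap}:v:${cap.overlayId}`;
  if (cap.peers !== undefined) nuri += `:l:${cap.peers}`; // same-store links need no locator
  return nuri;
}

console.log(readOnlyNuri({ repoId: "R", readCap: "C", overlayId: "V", peers: "P" }));
// did:ng:o:R:r:C:v:V:l:P
```

This matches the table row `:o:[repoid]:r:[readcap]:v:[overlayID]:l:[peers]`; the other capability shapes in the table compose the same way from their respective parts.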
#### Nuri stored in triplestore
When those capabilities are stored inside an RDF document, they are decomposed into several parts, and each part is inserted as a separate triple. Here are more details:
| part | corresponding triple(s) |
| --------------------------------- | --------------------------------------------------------------- |
| | |
| `:a/b/g/o:[repoid]:p:[inboxInfo]` | `<did:ng:a/b/g/o:[repoid]> <ng:inbox> <did:ng:p:[inboxInfo]>` |
| `:p:[inboxInfo]:l:[peers]` | `<did:ng:v:[overlayid]> <ng:locator> <did:ng:l:[peers]>` |
| `:o:[repoid]:r:[readcap]` | `<did:ng:o:[repoid]> <ng:access> <did:ng:r:[readcap]>` |
| (if readcap is a branch) | `<did:ng:o:[repoid]> <ng:revision> <did:ng:b:[branchid]> ` |
| (if readcap is a branch) | `<did:ng:b:[branchid]> <ng:access> <did:ng:r:[branch_readcap]>` |
| `:o:[repoid]:v:[overlayid]` | `<did:ng:o:[repoid]> <ng:overlay> <did:ng:v:[overlayid]>` |
| `:o:[repoid]:w:[overlaylink]` | `<did:ng:v:[overlayid]> <ng:inner> <did:ng:w:[overlaylink]>` |
| | `<did:ng:v:[overlayid]> <ng:locator> <did:ng:l:[peers]>` |
| `:o:[repoid]:n:[readtoken]` | `<did:ng:o:[repoid]> <ng:access> <did:ng:n:[readtoken]>` |
| `:o:[repoid]:s:[permacap]` | `<did:ng:o:[repoid]> <ng:access> <did:ng:s:[permacap]>` |
| `:o:[repoid]:c:[commitid]…` | `<did:ng:o:[repoid]> <ng:revision> <did:ng:c:[]:c:[]>` |
| | `<did:ng:c:[commitid]> <ng:access> <k:[key]:h:[topicid]>` |
| `:o:[repoid]:a:[name]` | `<did:ng:o:[repoid]> <ng:revision> <did:ng:a:[name]>` |
| `:o:[repoid]:b:[branchid]` | `<did:ng:o:[repoid]> <ng:revision> <did:ng:b:[branchid]>` |
| | `<did:ng:b:[branchid]> <ng:access> <did:ng:r:[branch_readcap]>` |
| `:j:[objectid]:k:[key]` | `<did:ng:j:[objectid]> <ng:access> <did:ng:k:[key]:h:[topic]>` |
| | `<did:ng:j:[objectid]> <ng:overlay> <did:ng:v:[overlayid]>` |
| if it is an attachment | `<did:ng:o:[repoid]> <ng:attachment> <did:ng:j:[objectid]>` |
When a capability is added to an RDF document, two additional triples can indicate a special purpose for this capability:
- `<did:ng:o:[repoid]> <ng:includes> <did:ng:[o/a/b/c]>` means that the `<object>` should be included: if needed, it should be fetched and kept in “cache” in the User Storage, and when a query is made on the Document, that `<object>` should be included in the target graphs of the query.
- `<did:ng:o:[repoid]> <ng:subscribe> <did:ng:[o/a/b]>`: if the resource is included and the `ng:subscribe` predicate is present, the topic will be subscribed to and the branch will be kept in sync.
Otherwise, without those triples, the capability is there because the repo, branch, commit or object is referenced somewhere in the RDF (as subject or object) or in the discrete doc (a link or embedded media).
The decomposition into several triples enables deduplication of some information (if there are several links to the same overlay, the ng:access and ng:locator triples will be the same, so they will be deduplicated). It also ensures that the URI used to reference the foreign resource contains only the RepoId or ObjectId. This is important, especially for the RepoId, because with RDF we want to establish facts between resources that can be traversed with SPARQL queries. The unique identifier of a resource is `<did:ng:o:[repoid]>` and not the full DID capability. No matter where the resource is actually stored (the locator and overlay parts of the DID capability), which access we possess for that resource (read or write), or which revision we want to use at the moment, the unique identifier of the resource should remain the same. This is why we never use the full DID capability as the URI of a subject or object in RDF. Instead we use the minimal form `<did:ng:o:[repoid]>`.
We should also mention here that NextGraph is fully compatible with `<http(s)://…>` URIs and that they can coexist in any Document. `<ng:includes>` might work for http resources that can be dereferenced online with a GET, but we are not going to implement that right now (not a priority), and `<ng:subscribe>` will never work for those URLs. Instead, we thought of using `<ng:refresh> "3600"` and `<ng:cache> "3600"`: the former would periodically refresh the dereferenced resource (here, every hour), while the latter would cache the resource after the first dereferencing for a period of 1h, then delete the cache to save space, only dereference again if need be (before processing any query on the resource), and cache it again for another hour. Those two additional predicates are not planned for now, as they are not a priority.
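The decomposition described in the table can be sketched like this (hypothetical helper; triples are shown as plain strings, while a real implementation would emit RDF terms):

```typescript
// Sketch: decompose a capability of the form
// :o:[repoid]:r:[readcap]:v:[overlayid]:l:[peers]
// into the separate triples listed in the table above.
// All arguments are placeholder strings.
function capabilityToTriples(
  repoId: string,
  readCap: string,
  overlayId: string,
  peers: string
): string[] {
  return [
    `<did:ng:o:${repoId}> <ng:access> <did:ng:r:${readCap}>`,
    `<did:ng:o:${repoId}> <ng:overlay> <did:ng:v:${overlayId}>`,
    // shared by every link into the same overlay, hence deduplicated:
    `<did:ng:v:${overlayId}> <ng:locator> <did:ng:l:${peers}>`,
  ];
}
```

Note how the subject of the first two triples is only `<did:ng:o:[repoid]>`: the minimal form that is used everywhere else in the graph to identify the resource.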
#### Identifiers
Here are the identifiers that can be used to refer to any resource anywhere in the graph. They are unique global identifiers. You can see that they are all (except users) suffixed with the overlayid, which reduces the risk of collision.
| Identifiers | that can be used in subject or object of any triple |
| ---------------------------------- | --------------------------------------------------- |
| | |
| `<did:ng:a/b/g:[storeid]>` | profile of a public, protected or group store |
| `<did:ng:o:[repoid]>` | a document |
| `<did:ng:o:[repoid]#fragment>` | fragment of a Document |
| `<did:ng:o:[repoid]:u:[randomid]>` | for skolemized blank nodes |
| `<did:ng:o:[repoid]:t:[commitid]>` | for commit nodes (not implemented yet) |
| `<did:ng:j:[objectid]>` | a binary file |
`did:ng:o:[repoid]` can be omitted when the identifier is understood as being within a specified named graph, or when using a BASE in queries.
#### Nuri in SPARQL
The graph part of any quad in NextGraph is reserved. The GSP (Graph Store Protocol) is not accessible to any client. Still, it is possible to use the graph part in SPARQL Query and SPARQL Update, with some limitations:
- for SPARQL Query, the named graph can only have the forms:
- `<did:ng:v:[overlayid]>` for the whole store
- `<did:ng:o:[repoid]:v:[overlayid]>` for the main branch of a repo,
- `<did:ng:o:[repoid]:v:[overlayid]:b:[branchid]>` for a specific branch of a repo,
- `<did:ng:o:[repoid]:v:[overlayid]:a:[name]>` for a named branch or named commit,
- `<did:ng:o:[repoid]:v:[overlayid]:c:[commitid]>` for a specific HEAD (the :c: part can be repeated if we want a multi-head).
- `<did:ng:o:[repoid]:v:[overlayid]:n:[readtoken]>` for a query with readtoken (to a repo, branch or store)
- for SPARQL Update, the named graph can only have the forms:
- `<did:ng:o:[repoid]:v:[overlayid]>` for the main branch of a repo, or
- `<did:ng:o:[repoid]:v:[overlayid]:b:[branchid]>` for a specific branch of a repo,
- `<did:ng:o:[repoid]:v:[overlayid]:a:[name]>` for a named branch.
- it is not possible to create new documents with the SPARQL API; the App API should be used instead.
- apps have access to a `<>` graph name (BASE) which represents the doc attached to the App instance.
When no default graph is included in the query (with USING, GRAPH, FROM), the target passed in the App API call is used. The target URIs given in the following table can also be used in the graph part of a SPARQL query. A target can have the form:
#### Targets
| Target | URI | explanation |
| ---------------- | -------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| | | |
| UserSite | did:ng:i | whole dataset of the user (User Storage) personal identity |
| ProtectedStore | did:ng:b | idem for protected store |
| PrivateStore | did:ng:c | idem for private store |
| AllDialogs | did:ng:d | union of all the Dialogs of personal identity (all the DMs) |
| Dialog(String) | did:ng:d:[name] | shortname given locally to a Dialog (e.g. "bob", "alice") |
| | did:ng:d:a/b/g:[ProfileID] | a Dialog with a specific ProfilerId |
| AllGroups | did:ng:g | union of all group Stores and all their documents |
| Group(String) | did:ng:g:[name] | shortname given locally to a Group (e.g. "birthday_party", "trip_to_NL") |
| | did:ng:g:g/o:[storeid] | union of all the documents of a specific group store |
| Identity(UserId) | did:ng:i:[userid] | search inside the User Storage of another Identity of the user, that is present in their wallet and has been opened. All the URIs described here (except :i) can be used as suffixes behind `:i:[xxx]` |
| | did:ng:i:n:[name] | same as above but with the shortname for the identity (e.g. "work") |
| a document | did:ng:o | |
For all the above URIs that are stores, adding a trailing `:r` will restrict the graph to the root document of the store.
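For example, a SPARQL query restricted to the main branch of one document names the graph `<did:ng:o:[repoid]:v:[overlayid]>`, as listed above. A sketch (the IDs are placeholders, and sending the query through the SDK is not shown):

```typescript
// Placeholder IDs; in practice they come from the capability you hold.
const repoId = "REPO_ID";
const overlayId = "OVERLAY_ID";

// A named graph of the form <did:ng:o:[repoid]:v:[overlayid]>
// targets the main branch of the repo.
const query = `
SELECT ?s ?p ?o WHERE {
  GRAPH <did:ng:o:${repoId}:v:${overlayId}> {
    ?s ?p ?o .
  }
}`;
console.log(query.includes(`<did:ng:o:${repoId}:v:${overlayId}>`)); // true
```

Appending `:b:[branchid]`, `:a:[name]` or `:c:[commitid]` to the graph IRI narrows the query to a branch, a named branch/commit, or a specific HEAD, per the lists above.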
#### Special branches
| Branch target | URI suffix | explanation |
| ------------- | ---------- | ------------------------------------------------------------------------------ |
| | | |
| Chat | :h | internal chat accessible to members |
| Stream | :s | feed of activity, following, reactions sent, new content advert, repost, boost |
| Comments | :m | all the comments on the document |
| BackLinks | :v | mentions, followers, inverse relationships, reactions received and backlinks |
| Context | :x | contains the JSON-LD context (prefixes → link to ontologies) |
| Follower | :h | list of followers |
| Following | :y | list of following |
#### context and ontologies
As shown above, there is a Context branch in each Document.
What is it for?
The Context branch is a mini RDF file that contains some triples describing the JSON-LD Context. It uses the [JSON-LD Vocabulary](https://www.w3.org/ns/json-ld), and can be retrieved in its usual JSON-LD format.
It expresses the mapping of prefixes to the URI of the ontologies used in the document. It is common to all branches of the document. In every Header of a Document, the predicate `<ng:x>` contains as object the full link (DID cap) needed in order to open the branch and read the context. If the same context is shared across documents, then this predicate can point to a common context branch that sits outside of the document. In this case the context branch can be empty.
In order to see and modify the triples of the context, the suffix `:x` should be added at the end of the URI of the document, as this is the shortcut to access the context branch.
#### binary files
Binary files can be uploaded to an overlay, and then linked to a specific branch of a repo.
Their content is deduplicated at the level of the overlay.
If 2 branches/docs use the same binary, they should add it both in their respective branch (with AddFile). The content will only be uploaded and downloaded once.
Once this is done, there are four ways to use those files within the branch:
- inside a rich-text document, the file can be embedded (image, video, svg)
- inside a JSON/XML data document, the file can be “linked” by just entering its URI in a string value.
- inside the RDF document, the file can be linked as subject or object in a triple, for example to add a profile picture with the predicate `<ng:j>`, which is equivalent to `<http://www.w3.org/2006/vcard/ns#photo>` or `<http://xmlns.com/foaf/0.1/img>`
- if the file has to appear as an attachment (downloadable), then it must be added with an `<ng:attachment>` predicate.
All the files of a branch (added with the AddFile commit) are listed by the meta predicate `<ng:f>`.
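As a sketch, the last two cases map to plain triples; for instance a SPARQL update adding a file as a downloadable attachment could look like this (placeholder IDs; the call that submits the update is not shown):

```typescript
// Placeholder IDs, for illustration only.
const repoId = "REPO_ID";
const objectId = "OBJECT_ID";

// <ng:attachment> makes the file appear as a downloadable attachment;
// using <ng:j> instead would link it as the default image / profile picture.
const update = `
INSERT DATA {
  <did:ng:o:${repoId}> <ng:attachment> <did:ng:j:${objectId}> .
}`;
```

Remember that the binary content itself is uploaded separately (AddFile) and deduplicated at the overlay level; the triple only links the already-uploaded object to the document.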
#### System ontology
| predicate | type | format | description |
| ------------- | ------------ | ------------------------------------------ | ------------------------------------------------------------------------------------------------------------------ |
| | | | |
| ng:main | Nuri | did:ng:r:[readcap] | main branch (from the header branch) |
| ng:header | Nuri | did:ng:r:[readcap] | header branch (from any branch) |
| ng:access | Nuri | did:ng:r:[readcap] | readcap of a branch, gives option to <br/>subscribe to topic and retrieve<br/> all the content (domain did:ng:o:b) |
| | | did:ng:n:[readtoken] | same but without subscription possible |
| | | did:ng:s:[permacap] | a permacap: a traditional capability (like z-caps) |
| | | did:ng:k:[key]:h:[topicid] | key and optional topicid of a commit |
| ng:revision | Nuri | did:ng:c:[]:c:[] | tells that we are interested in specific HEADs |
| | | did:ng:a:[name] | idem with a specific named branch or commit |
| | | did:ng:b:[branchid] | idem with a specific branch ID |
| ng:overlay | Nuri | did:ng:v:[overlayid] | on which overlay ID (always outer)<br/> is the doc or object |
| ng:inner | Nuri | did:ng:w:[overlaylink] | used by members to join the<br/> inner overlay |
| ng:locator | Nuri | did:ng:l:[locator] | list of peers to contact in order<br/> to join the overlay |
| ng:inbox | Nuri | did:ng:p:[inboxInfo] | the details of the inbox |
| ng:includes | Nuri<br/>URL | did:ng:[o/a/b/c]:[...] or http... | the doc/branch/commit/rdfs:Resource<br/> should be included, fetched if needed,<br/> and its named graph included |
| ng:subscribe | Nuri | did:ng:[o/a/b]:[...] | the referenced doc/branch should be<br/> subscribed to, using ng:access |
| ng:refresh | integer | | for includes that are http(s)<br/> resources, the refresh interval |
| ng:cache | integer | | for includes that are http(s) resources,<br/> the timeout before clearing cache.<br/> (conflicts with ng:refresh) |
| ng:federate | Nuri | | the remote Nuri will be included<br/> in the graph traversals |
| ng:attachment | Nuri | did:ng:j:[]:v:[] | a link to a file that is an attachment<br/> to the document |
| ng:dialog | Nuri | did:ng:d:i:[dialogid] | the dialog of a contact |
| ng:content | string | | content of a comment or message<br/> (domain is did:ng:o:t) |
| ng:correction | string | | correction of a comment or message<br/> (domain is did:ng:o:t) |
| ng:comment | Nuri | did:ng:o:b:[branch]<br/>did:ng:o:t:[token] | branch containing a comment<br/> on doc, or commit node |
| ng:message | Nuri | did:ng:o:b:[branch]<br/>did:ng:o:t:[token] | branch containing a message in chat,<br/> or commit node |
| ng:replyto | Nuri | did:ng:o:b:[branch]<br/>did:ng:o:t:[token] | comment or message is in reply to <br/>another comment or message |
| ng:stores | Nuri | did:ng:o | a store stores a document<br/> (auto-generated) |
| ng:site | Nuri | did:ng:o | to link the public store from<br/>the protected store (optional) |
| ng:protected | Nuri | did:ng:o:v:r:l | link to protected store main branch,<br/> for example from a follow request |
| ng:follows | Nuri | did:ng:o | profile that is followed. needs <br/> :access, :overlay, :locator |
| ng:index | Nuri | did:ng:o:r | entrypoint of a service or app |

@@ -4,4 +4,42 @@ description: How permissions work in NextGraph, based on cryptographic capabilit
layout: ../../../layouts/MainLayout.astro
---
Please bear with us as we are currently writing/publishing the documentation (August 23 until August 28, 2024).
As explained already in the [Nuri](/en/framework/nuri) chapter, all permissions are handled in NextGraph with cryptographic capabilities.
When the user shares a link to a document or to a branch, this link (Nuri) contains all the capabilities needed in order to open and read this document, and optionally, to modify it.
Those capabilities are composed of:
- the RepoID of the document
- a read capability on the branch.
- If it is the root branch, then access to the whole document and all its branches will be granted.
- If it is a transactional branch (Header, Main, a block, a fork, etc...) then only this specific branch will be accessible.
- the overlay ID. See the [Network](/en/network) chapter about overlays.
- the locator, which is a list of brokers that can be contacted in order to join the overlay and access the topics. Read more about topics in the [sync protocol](/en/protocol) chapter.
If the user receives write access to the repo, then the capability will also contain the ReadCap of the inner overlay.
This inner overlay is where all the updates happen, and the ReadCap is needed in order to enter this overlay.
The editor will also need to have their UserId added to the internal list of editors within the repo, and they will also need to receive the `BranchWriteCapSecret` that is needed in order to publish in the topic's pub/sub.
All of this is done transparently, and the only thing needed is to go to "Permissions" in the Document menu, and then share the generated Nuri with the new editor.
An API will also be provided for permission manipulation.
Adding permissions can be done offline (as long as the new user can synchronize), and is an asynchronous operation.
But removing permissions is a synchronous operation that requires a SyncSignature (signed by the quorum).
More on that soon...
### Authorization & Capability Delegation
Apps developed with our SDK are treated as _untrusted_ by the Verifier, and will therefore need to provide all the capabilities for the Documents they intend to read or write.
As apps are considered viewers and editors, an App instance always runs with an associated document, where the data generated by the app can be saved.
If the app needs access to other documents of the User, the capabilities can be saved by the app in the current document that the app instance has been associated with.
The Verifier, which has access to that associated document, will automatically use those capabilities for any API access made by the app. The programmer can also attach some additional capabilities to every API call.
More on that feature once it is implemented...

@@ -4,4 +4,6 @@ description: Roadmap of future improvements on the Framework of NextGraph
layout: ../../../layouts/MainLayout.astro
---
Please bear with us as we are currently writing/publishing the documentation (August 23 until August 28, 2024).
We just finished a one-and-a-half-year [roadmap](https://nextgraph.org/roadmap) that was funded by NLnet and NGI, and that led to the publication of the first alpha release of NextGraph and all the Apps (web, Linux, macOS, Windows and Android).
The next roadmap will be published in October 2024. We are still preparing it at the moment. It will contain several milestones related to the framework. Stay tuned!

@@ -33,17 +33,79 @@ A new ontology can be defined by creating a new Document of type Data / Ontology
### ng ontology
| predicate | R/W | type | label | comment | equivalent |
| --------- | --- | ------------- | ------- | ----------------------------- | -------------------------------------------- |
| | | | | | |
| ng:a | RW | string | about | short description | rdfs:comment as:summary |
| ng:b | R | string | weblink | link for sharing over the web | example https://nextgraph.one/#/did:ng:o:... |
| ng:c | R | rdfs:Class | class | primary class | rdf:type |
| ng:d | R | NURI | inbox | inbox of repo | as:inbox |
| ng:e | RW | rdfs:Resource | extlink | http external link | |
There is a special prefix `ng:` for the NextGraph ontology (not to be confused with the `did:ng` method). It is available in all RDF documents and cannot be overridden by other prefixes/contexts.
It has a list of predicates that help manage the Documents. It is also a way for us to offer a metadata API on each document, that can be queried with SPARQL. This API automatically generates some virtual triples about the document. Let's have a more detailed look at them.
| predicate | R/W | type | label | comment | equivalent |
| --------- | --- | ------------- | ---------- | -------------------------------------------- | ------------------------------------------------------- |
| | | | | | |
| ng:a | RW | string | about | short description | rdfs:comment<br/> as:summary<br/>og:description |
| ng:b | R | string | weblink | link at nextgraph.one | |
| ng:c | R | rdfs:Class | class | primary class | rdf:type |
| ng:e | RW | rdfs:Resource | extlink | http external link | og:url <br/> as:url |
| ng:f | R | Nuri | file | a linked binary file | |
| ng:g | R | Nuri | nuri | nuri of self | |
| ng:h | R | Nuri | follower | branch containing followers | as:followers |
| ng:i | R | Nuri | inbox | inbox of repo ng:p | as:inbox |
| ng:j | RW | Nuri | image | default image | as:image<br/> og:image <br/> vcard:photo <br/> foaf:img |
| ng:k | RW | keyword | keyword | list of related keywords | |
| ng:l | RW | langString | lang | language BCP47 | og:locale <br/> rdf:language |
| ng:m | R | Nuri | comment | comment branch | |
| ng:n | RW | string | title | title or name | rdfs:label <br/> as:name <br/> og:title |
| ng:o | R | Nuri | viewer | list of viewers | |
| ng:p | R | TBD | permission | list of permissions | |
| ng:q | R | Nuri | qrcode | image of QR-code <br/>containing ng:b | |
| ng:r | R | Nuri | store | store header branch | |
| ng:s | R | Nuri | stream | stream branch of store | |
| ng:t | RW | dateTime | time | date and time | |
| ng:u | RW | Nuri | icon | favicon image | as:icon |
| ng:v | R | Nuri | backlinks | backlinks branch | |
| ng:w | R | Nuri | editor | list of editors | |
| ng:x | R | Nuri | context | context branch for <br/>JSON-LD and prefixes | |
| ng:y | R | Nuri | following | branch containing following | as:following |
| ng:z | R | Nuri | service | list of services | |
| ng:loc | RW | TBD | location | location (country, <br/>geoname, coordinate) | |
`Nuri` is a subClass of `rdfs:Resource`
Apart from `ng:f`, `ng:g`, `ng:p` and `ng:q`, all the other predicates sit in the `Header` branch.
`ng:c` also sits in every block branch.
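As mentioned above, these predicates form a metadata API that can be queried with SPARQL. A sketch (the document Nuri is a placeholder, and `<ng:n>`/`<ng:a>`/`<ng:c>` are taken from the table above):

```typescript
// Placeholder Nuri of the document whose metadata we want.
const docNuri = "did:ng:o:REPO_ID";

// Read some of the virtual metadata triples: title (ng:n), short
// description (ng:a) and primary class (ng:c), the last two as optional.
const metadataQuery = `
SELECT ?title ?about ?class WHERE {
  <${docNuri}> <ng:n> ?title .
  OPTIONAL { <${docNuri}> <ng:a> ?about . }
  OPTIONAL { <${docNuri}> <ng:c> ?class . }
}`;
```

Only the R/W predicates (like `ng:a` and `ng:n`) can also be written; the read-only ones are generated by the system.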
### official primary classes
See [Features](/en/features) for a list of all official primary classes.
### common prefixes available in NextGraph
| prefix | resolves to |
| ------------- | ------------------------------------------------- |
| rdf: | http://www.w3.org/1999/02/22-rdf-syntax-ns# |
| rdfs: | http://www.w3.org/2000/01/rdf-schema# |
| xsd: | http://www.w3.org/2001/XMLSchema# |
| owl: | http://www.w3.org/2002/07/owl# |
| sh: | http://www.w3.org/ns/shacl# |
| shex: | http://www.w3.org/ns/shex# |
| skos: | http://www.w3.org/2004/02/skos/core# |
| schema: | https://schema.org/ |
| foaf: | http://xmlns.com/foaf/0.1/ |
| relationship: | http://purl.org/vocab/relationship/ |
| dcterms: | http://purl.org/dc/terms/ |
| dcmitype: | http://purl.org/dc/dcmitype/ |
| as: | https://www.w3.org/ns/activitystreams# |
| ldp: | http://www.w3.org/ns/ldp# |
| vcard: | http://www.w3.org/2006/vcard/ns# |
| og: | [http://ogp.me/ns#](https://ogp.me/ns/ogp.me.ttl) |
| cc: | http://creativecommons.org/ns# |
| sec: | https://w3id.org/security# |
| wgs: | http://www.w3.org/2003/01/geo/wgs84_pos# |
| gn: | https://www.geonames.org/ontology# |
| geo: | http://www.opengis.net/ont/geosparql# |
| time: | http://www.w3.org/2006/time# |
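As a quick illustration of how these prefixes are used, expanding a prefixed name against this table could look like the following sketch (a hypothetical helper, not the SDK API):

```typescript
// A few of the prefixes from the table above, as a lookup map.
const PREFIXES: Record<string, string> = {
  rdf: "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
  rdfs: "http://www.w3.org/2000/01/rdf-schema#",
  foaf: "http://xmlns.com/foaf/0.1/",
  vcard: "http://www.w3.org/2006/vcard/ns#",
};

// Expand a prefixed name like "rdfs:label" into its full IRI.
function expand(curie: string): string {
  const i = curie.indexOf(":");
  if (i < 0) throw new Error(`not a prefixed name: ${curie}`);
  const base = PREFIXES[curie.slice(0, i)];
  if (base === undefined) throw new Error(`unknown prefix: ${curie.slice(0, i)}`);
  return base + curie.slice(i + 1);
}

console.log(expand("rdfs:label")); // http://www.w3.org/2000/01/rdf-schema#label
```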
### well known Ontologies by domain
TBD
You can have a look at [Awesome ontology](https://github.com/ozekik/awesome-ontology#ontologies-and-vocabularies) and [LOV (Linked Open Vocabularies)](https://lov.linkeddata.es/) in the meantime.

@@ -61,7 +61,7 @@ Then of course, we attach a nice and easy-to-read text label to each resource, s
As you can see, the predicate's names are often written with 2 words separated by a colon like `rdfs:label`. This means that we are referring to the prefix `rdfs` and to the fragment `label` inside it. The prefix `rdfs` must have been defined somewhere else before, and it always points to a full URI that contains the ontology.
- In the classical semantic web, this URI is a URL, in NextGraph it is a NURI (a NextGraph DID URI) or it can also be a URL if needed.
+ In the classical semantic web, this URI is a URL, in NextGraph it is a Nuri (a NextGraph DID URI) or it can also be a URL if needed.
This "file" containing the ontology is most often in the OWL format (which is itself RDF), and it describes the classes, the properties, and how they can be combined (which properties belong to which classes, the cardinality of relationships, etc.).
@@ -141,7 +141,7 @@ The included block can be from the same document, from another document in the sa
And this leads us to an explanation about what happens to named graphs in NextGraph.
Named Graphs are an RDF and SPARQL feature that lets you organize your triples into a bag. This bag contains your triples, and we call this bag a Graph. It also gets an ID in the form of a URI (normally a URL; in NextGraph, a Nuri).
SPARQL has options to specify which named graph(s) you want the query to relate to.
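The "bag of triples" idea can be sketched with plain data structures (the graph names below are invented example URLs; in NextGraph a graph name would be a Nuri):

```python
from collections import defaultdict

# graph IRI -> set of (subject, predicate, object) triples
store = defaultdict(set)

store["http://example.org/graphs/people"].add(
    ("http://example.org/alice", "http://xmlns.com/foaf/0.1/name", "Alice")
)
store["http://example.org/graphs/projects"].add(
    ("http://example.org/proj1", "http://www.w3.org/2000/01/rdf-schema#label", "Project One")
)

def triples_in(graph_iri):
    """Rough analogue of SPARQL's `GRAPH <iri> { ?s ?p ?o }` pattern:
    restrict matching to a single named graph."""
    return store[graph_iri]
```

A SPARQL engine does the same scoping internally when a query names a graph explicitly.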
@@ -169,7 +169,7 @@ Then we have other use cases for extra triples in the Document:
- fragments, which are prefixed with the authoritative ID and followed by a hash and a string (like `#label` in our previous example).
- blank nodes that have been skolemized. They get a Nuri of the form `did:ng:o:...:u:...`. This is because blank nodes cannot exist in a local-first system, as we need to give them a unique ID. This is done with the [skolemization procedure](https://www.w3.org/TR/rdf11-concepts/#section-skolemization). For the user or programmer, skolemization is transparent: you can use blank nodes in SPARQL UPDATE and they will automatically be translated to skolems. In SPARQL QUERY, blank nodes are just hidden variables anyway, so there is no impact.
But those extra triples (fragments and skolems) are all prefixed with the authoritative ID, so they are considered authoritative too.
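To make the skolemization idea concrete, here is a minimal Python sketch. Everything in it is illustrative: the `.well-known/genid/` base follows the W3C convention, and NextGraph's actual `did:ng:o:...:u:...` skolem form is not reproduced here.

```python
# Skolemization: blank-node labels are replaced by globally unique IRIs,
# so the same data can safely live in a local-first system.
import uuid

def skolemize(triples, base="http://example.org/.well-known/genid/"):
    mapping = {}

    def skolem(term):
        # blank nodes are conventionally written with a "_:" prefix
        if isinstance(term, str) and term.startswith("_:"):
            if term not in mapping:
                mapping[term] = base + uuid.uuid4().hex
            return mapping[term]
        return term

    return [(skolem(s), p, skolem(o)) for s, p, o in triples]

triples = [
    ("_:b0", "http://www.w3.org/2000/01/rdf-schema#label", "a former blank node"),
    ("_:b0", "http://example.org/linkedTo", "_:b1"),
]
skolemized = skolemize(triples)
```

Note that every occurrence of the same blank-node label maps to the same skolem IRI, which is what keeps the graph's shape intact.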
@@ -4,4 +4,4 @@ description: Run services in the background that can manipulate your data or the
layout: ../../../layouts/MainLayout.astro
---
Please bear with us as we are currently writing/publishing the documentation (23 of August until 28 of August 2024).
Services are not implemented yet. Stay tuned for some updates.
@@ -4,4 +4,40 @@ description: Signatures of the content of Documents let you verify and proove th
layout: ../../../layouts/MainLayout.astro
---
Please bear with us as we are currently writing/publishing the documentation (23 of August until 28 of August 2024).
As we have already seen in previous chapters, there are 2 main signature mechanisms in NextGraph:
- each commit is signed by its author, in order to prove the authenticity of the data and to verify permissions. Only authorized editors can add certain types of commits. This signature is only available to other editors in the Repo (because the UserId is hashed, and only the editors know who is part of the list of editors). This signature is useless to anybody outside the Repo (external readers).
- we added a mechanism of threshold signature, which can be used in async or sync mode, and which involves another type of user: the signers. Editors and signers do not have to be the same users within a repo. For example, signers can be _observers_ that are not capable of editing the document, but that will guarantee the integrity of the repo. These threshold signatures are not systematic, and need to be requested on a case-by-case basis. We distinguish 2 types of threshold signatures:
- asynchronous: used for CRDT operations. The request for signature must happen **after** the commit has been attached to the DAG and sent in the pub/sub.
- synchronous: this signature adds guarantees on the finality, sequence, and consistency of the transaction. The request for signing happens **before** the commits are added to the DAG or sent in the pub/sub. In fact, the commits are **embedded** with the signature, and cannot be seen by others if the signature hasn't been validated first. As requesting and obtaining the signature is a synchronous operation, the whole quorum needs to be online before the transaction can be accepted. For this reason, signers of the synchronous quorum should be online often. (Async and sync signatures have 2 distinct quorums, for that reason.)
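As a rough illustration of the threshold idea (this is not NextGraph's actual signing code; the signer names and the percentage are invented), a threshold signature is complete once enough distinct signers from the configured pool have contributed a valid share:

```python
import math

def quorum_threshold(pool_size: int, percent: int) -> int:
    """Smallest number of signers satisfying the given percentage."""
    return math.ceil(pool_size * percent / 100)

def quorum_reached(valid_signers: set, pool: set, threshold: int) -> bool:
    # only signers that belong to the configured pool count
    return len(valid_signers & pool) >= threshold

pool = {"signer-A", "signer-B", "signer-C", "signer-D", "signer-E"}
threshold = quorum_threshold(len(pool), 70)  # 70% of 5 signers -> 4
```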
In this chapter, we will only deal with threshold signatures.
What is implemented so far:
The user can ask for the commits at the HEAD (the latest commits received by the replica) to be signed, if they are not signed already. This will be an async signature.
Alternatively, the user can ask for a snapshot to be taken first (which will merge all the current HEADs), and then ask for this snapshot to be signed.
The snapshot can then be shared with external users by using its Nuri, and the external user will be able to read it, and also to verify its signature.
A snapshot contains the entirety of the content of the branch (both the discrete and graph parts), serialized as JSON.
The signature contains a certificate, which is a chain of proofs with the RepoID (a public key) at its root. This way, the snapshot can be authenticated as authored by the editors of the Repo/Document.
The same goes for the HEAD commits that get signed.
When external users retrieve those commits (or the snapshot), they use the Ext Protocol, which does not require any authentication from the user.
As a security measure, the commits and any block received over the Ext Protocol are stripped of their headers. This means that it is impossible to access the causal past of the commits, and impossible to reconstruct the DAG.
This security feature is important, as external users should only be able to read what has been shared with them. If a commit has been shared, then only the transaction within that commit can be read. If a snapshot has been shared, then all the current content of the branch can be read, but it cannot be subscribed to for future updates.
If the user should be able to subscribe to the topic of the branch in order to receive updates, then a ReadCap of the branch should be shared with them instead. Those branch ReadCaps do not need a threshold signature, because they are already signed internally: at least the first commit of the branch (the `Branch` commit) was signed with a synchronous signature when the branch was created. And because the user can reconstruct the whole DAG up to this first commit (by doing a sync operation), they will be able to verify the authenticity of the branch. If the authenticity of the content should also be verifiable, then an async signature must be added at the HEADs; the user who got the branch ReadCap will receive it too, and will be able to verify the authenticity of the content as well.
For now, the Ext Protocol and the reading of async signatures, snapshots, and commits can be done with the CLI. Soon, APIs in the SDK will be added as well.
More work is needed on the signature feature in general, stay tuned!
@@ -4,4 +4,4 @@ description: Define and use Smart Contracts to manage multi-party business proce
layout: ../../../layouts/MainLayout.astro
---
Please bear with us as we are currently writing/publishing the documentation (23 of August until 28 of August 2024).
Smart Contracts are not implemented yet. Stay tuned for some updates.
@@ -4,4 +4,46 @@ description: Mutate your data with Async and Sync transactions inside a Repo
layout: ../../../layouts/MainLayout.astro
---
Please bear with us as we are currently writing/publishing the documentation (23 of August until 28 of August 2024).
We want to address the question of the database and consistency model.
We have seen earlier that updates on the data happen inside Commits. Each commit represents a transaction that is executed atomically when received by a replica. Those transactions are called **asynchronous**.
### Synchronous transaction
As we explained above, NextGraph is based on CRDTs and offers strong eventual consistency guarantees.
That's a paradigm shift, and developers will have to learn how to code new applications that access the data locally instead of remotely.
There is plenty of work needed to adapt the many existing open-source applications so they can fit the CRDT mechanism. For apps oriented towards content editing, CRDTs are a perfect match, bringing live/real-time collaboration and automatic syncing to any such app and content editor (as long as the data format can be based on JSON, XML, or RDF).
That would work well for everything that is content/document oriented.
But in some cases, the App is dealing with some business logic that needs ACID properties on the underlying database.
For example, any e-commerce app selling something online needs, at some point (when checkout happens and stocks need to be checked and updated), to be able to run an `atomic` and `immediately consistent` transaction on the database. CRDTs are everything but that.
It wouldn't make sense for developers using our Framework to have to install and interface with a PostgreSQL server for every transaction that must be ACID.
For this reason, we have integrated a specific feature into the Repo mechanism that lets the developer create a Synchronous Transaction in a document (a document that should be understood as a database). These types of transactions have higher guarantees than the Asynchronous transactions based on CRDTs, as they guarantee the **finality** of the transaction: once the commit has been accepted by a certain threshold of Signers, it is guaranteed that no other concurrent transaction has happened, or will ever happen, that would conflict with it or invalidate its invariants. Therefore the commit is final.
Basically, what we do is temporarily prevent any fork in the DAG while the transaction is being signed (we also call that “total order”; it creates a temporary bottleneck in the DAG), and the threshold, which must be above 50%, guarantees that any concurrent or future fork will be rejected. A supermajority can be used to calculate the threshold in cases where Byzantine or faulty users have to be considered (given N = all users and F = byzantine and/or faulty users, supermajority = (N+F)/2 + 1).
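The supermajority formula above can be checked with a few lines of arithmetic:

```python
def supermajority(n: int, f: int) -> int:
    """Threshold for N total users and F byzantine/faulty users:
    (N + F) / 2 + 1, with integer division."""
    return (n + f) // 2 + 1

# with no faulty users, this reduces to a plain majority (above 50%)
print(supermajority(10, 0))  # 6
# tolerating 2 byzantine users out of 10 raises the threshold
print(supermajority(10, 2))  # 7
```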
This is synchronous because it requires that the quorum of signers is online at the same time, and can agree on the transaction.
For simple use cases, a single Signer can be enough, as long as we know it will always be online (it can run, for example, on a self-hosted broker or in our SaaS). For the stock management of a small online retail business, that will most probably be enough. This unique signer will act as an ordering service that forces a total order on the transactions.
If more reliable setups are needed, a pool of signers can be added, with a quorum that is only 70% of the pool, for example. This setup allows some of the signers (30%) to fail or be offline, while still providing the total-order guarantees.
Furthermore, if the integrity of the database (understand: the Document) needs to be checked and asserted by several parties with conflicting interests, then those parties have to set up one Signer each, and the quorum should be 100%.
It is also possible to configure the Repo so that parts of its data are subject to the total-order requirement (ACID), while other parts are subject to partial order (CRDTs).
In addition to the traditional DBMS ACID business logic required by most e-commerce applications, this feature also enables the creation of Smart Contracts that represent a Finite State Machine (FSM) distributed among participants and signed by a quorum of those participants. More on that in the [Smart-Contract](/en/framework/smart-contract) chapter.
There is a limitation: synchronous transactions cannot cross Document boundaries.
For this purpose, we have designed a mechanism of cross-document handshake, where 2 Documents exchange messages (commits) signed by their respective quorums; this mechanism enables **cross-document synchronous transactions**. You'll find more on this topic in the [Smart-Contract](/en/framework/smart-contract) chapter.
This feature is very important, as it will enable Documents to hold data that has ACID requirements, mostly used in business logic. It complements the content-oriented feature of CRDTs well, while providing a unified Framework for developers, who can opt for total order or partial order according to their needs in terms of consistency and finality.
Synchronous transactions are already implemented, and we use them for managing permissions and for the compact operation. But they are not yet available in the API. Stay tuned!
@@ -4,4 +4,6 @@ description: NextGraph is compatible with the Solid Platform and Protocols, than
layout: ../../layouts/MainLayout.astro
---
Please bear with us as we are currently writing/publishing the documentation (23 of August until 28 of August 2024).
Thanks to our collaboration with the [ActivityPods project](https://activitypods.org), we will soon offer full compatibility with the [Solid standard](https://solidproject.org/). At the same time, all apps developed on the ActivityPods framework will be compatible with NextGraph.
Stay tuned!
@@ -4,9 +4,77 @@ description: The Verifier is decrypting the data and materializing the state of
layout: ../../layouts/MainLayout.astro
---
## Remote Verifier
The general architecture we have seen so far is composed of brokers and clients.
The client publishes into and subscribes to some topics, that represent branches, and receive the new commits.
Those commits are encrypted, and only the clients that possess the ReadCap can decrypt them.
Decrypting the commits is the job of the Verifier, which is running inside the App.
The verifier takes the commits one by one, respecting the causal order that links them one to another (the earliest causal past is processed first), and applies the CRDT updates/patches that each commit contains.
Eventually, the full content of the document is reconstructed, and we call this the “**materialized state**” of the document.
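A minimal sketch of that replay, with invented commit and patch shapes (real NextGraph commits carry encrypted CRDT operations, not dict updates):

```python
# Commits form a DAG linked by causal-past references; the Verifier
# applies them parents-first to build the materialized state.
from graphlib import TopologicalSorter

# commit id -> (causal parents, patch applied to the materialized state)
commits = {
    "c1": (set(), {"title": "Draft"}),
    "c2": ({"c1"}, {"body": "hello"}),
    "c3": ({"c1"}, {"title": "Final"}),        # c2 and c3 are concurrent;
    "c4": ({"c2", "c3"}, {"reviewed": True}),  # a real CRDT defines their merge
}

state = {}
dag = {cid: parents for cid, (parents, _) in commits.items()}
# static_order() yields each commit only after all of its causal past
for cid in TopologicalSorter(dag).static_order():
    state.update(commits[cid][1])
```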
Inside the web-app, the Verifier has to replay all the commits of a branch, one by one, every time it wants access to the content of a document (after a page refresh, the first time the doc is opened, or after each new login).
This is due to the fact that the webapp does not have a “User Storage” yet and it keeps the materialized state only in memory.
By contrast, the native apps have a User Storage and can save the materialized state to disk. When a new commit arrives and needs to be processed, the Verifier just opens the locally stored materialized state and applies the new commit to it. It doesn't need to reprocess all the commits like the webapp does.
The limitation on webapps is due to the fact that we use RocksDB to store the materialized state, and we do not yet have a version of RocksDB that works in the browser; this feature will be added in the future.
It should also be noted that all the data stored in the User Storage is encrypted at rest. As you understood, the materialized state is a plaintext aggregation of all the commits. For this reason, it has to be encrypted again before being saved. The encryption used is not the same as the one for the commits: it is RocksDB itself that transparently encrypts all the records (thanks to a plugin that we implemented). The encryption key is stored in the wallet and is unique per device.
### App API
The Verifier offers an API to the App, with which the App can read and write the data, and also query the RDF graph.
This API is present locally in:
- the native Apps (based on Tauri)
- a Tauri plugin that developers can use to create Tauri-based apps based on NextGraph Framework (not ready and not planned yet)
- the Rust library (crate “nextgraph”)
- the CLI
- the ng-sdk-js library, for developers who want to develop their own apps with front-end frameworks like Svelte, React, Vue, Angular, etc…
- the ng-sdk-node library (npm package called “nextgraph”), that can be used in nodeJS or Deno in order to access the data of documents in backend services.
The 2 Javascript libraries do not have a User Storage, so they only support in-memory Verifiers.
As the “User Storage for Web” feature will take some time to be coded, we offer another way to solve the problem of volatile materialized state in JS.
There is also the idea of having a full-fledged Verifier running inside nodeJS. This would use the NAPI-RS system, which compiles the Rust code of the verifier together with the RocksDB code, making it a binary library compatible with nodeJS that runs inside the nodeJS process. This will also take some time to be coded.
Instead, for both cases (JS in the web and in nodeJS), we offer the App API that connects to a remote Verifier.
The JS libraries can then connect to such remote Verifier and use the full set of functionalities, without a need to replay all the commits at every load.
Where can we find remote verifiers? In `ngd`, the daemon of NextGraph.
Usually `ngd` only acts as a broker, but it can be configured and used as a Verifier too.
In which use cases is it useful?
- when the end-user doesn't have a supported platform where the native app can be installed. For example, a workstation running OpenBSD or FreeBSD doesn't have a native app to download (and it cannot be compiled either, as Tauri doesn't support such platforms). In this case, the end-user has to launch a local ngd and open the webapp in their browser (http://localhost:1440). The Verifier will run remotely, inside ngd (which isn't very far: it is on the same machine). Because it is on localhost or in a private LAN, we allow the webapp to be served over http (without TLS), and the websocket also works well without TLS. But this doesn't work anymore if the public IP of the ngd server is used.
- when a nodeJS service needs access to the documents and does not want to use the in-memory Verifier because it needs quick access (like a headless CMS, Astro, an AI service like jan.ai, a SPARQL REST endpoint, an LDP endpoint, etc.). In this case, an ngd instance has to run on the same machine as the nodeJS process, or in the same LAN network (a Docker network, for example).
- in headless mode, when a server is using ngd as a quadstore/document store and the full credentials of the user identity have been delegated to that server. This is the case for ActivityPods, for example.
- on the SaaS/cloud of NextGraph, we run some ngd brokers that normally would not have any verifier. But in some cases, at the request of the end-user, we can run some verifiers that have limited access to some documents or stores of the user (if they want to serve their data as REST/HTTP endpoints, for example). The end-user will have to grant access to those resources to this remote verifier by providing their DID capabilities. A Verifier can see in clear all the data that it manipulates, so obviously users have to be careful about where they run a Verifier, and to whom they give the capabilities.
What is important to understand is that the Verifier needs to run in a trusted environment, because it holds the ReadCaps of the documents it is going to open, and in some cases it even holds the full credentials of the User Identity and has access to the whole set of documents in all the stores of the user.
The Verifier is the terminal point where the E2EE ends. After that, the whole AppProtocol deals with plaintext data.
For this reason, the Verifier should normally only run in a computer or device that is owned and controlled by the end-user.
### Remote Verifier
- A specific user wants to run a remote verifier on the server instead of running their verifier locally. This is the case for end-users on platforms that are not supported by Tauri, which powers all the native apps.
  The end-user on those platforms has to run a local ngd daemon instead, and access the app in their browser of choice at the URL http://localhost:1440. Here the breaking of E2EE is acceptable, as the decrypted data will reside locally, on the machine of the user.
@@ -16,10 +84,27 @@ Here are 3 main use-cases for the remote verifier:
The rest of the "session APIs" can be used in the same manner as with a local Verifier. The present JS library connects to the server transparently and opens a RemoteVerifier there.
The remote session can be detached, which means that even after the session is closed, or when the client disconnects from ngd, the Verifier still runs in the daemon.
This "detached" feature is useful if we want some automatic actions that only the Verifier can do to be performed in the background (signing, for example, is a background task).
When the NGbox becomes available, this "detached verifier" feature will be used extensively, as the goal of the NGbox is to provide the user with an ngd daemon running 24/7 in a trusted environment (owned hardware located at home or office) that has full access to the decrypted data.
- The second use case is what we call a Headless server (because it doesn't have any wallets connecting to it). It departs a bit from the general architecture of NextGraph, as it is meant for backward compatibility with the web 2.0 federation, based on domain names and without E2EE.
This mode of operation allows users to delegate all their trust to the server. In the future, we will provide the possibility to delegate access only to some parts of the User's data.
In Headless mode, the server can be used in a traditional federated way, where the server can see the user's data in clear and act accordingly. We have in mind here to offer bridges to existing federated protocols like ActivityPub and Solid (via the ActivityPods project) at first, and later to add other protocols like ATproto, Nostr, XMPP, and even SMTP! Any web 2.0 federated protocol could be bridged. At the same time, the bridging ngd server would still be a fully-fledged ngd daemon, offering all the advantages of NextGraph to its users, who could decide to port their data somewhere else, restrict the server's access to their own data, interact and collaborate with other users (of the federation or of the whole NextGraph network) in a secure and private way, and use the local-first NG app to access their own data offline.
- A third use case is to run some services (in nodeJS or Rust) that have received partial access to the user's data and can process it accordingly. For example: an AI service like jan.ai, a SPARQL REST endpoint, an LDP endpoint, an endpoint to fetch data that will be displayed by a headless framework like Astro, or any other REST/HTTP endpoint to access some of the user's data.
All of those use cases are handled with the present nodeJS library, using the API described [here](https://www.npmjs.com/package/nextgraph).
### Client Protocol
The Verifier talks to the Broker with the ClientProtocol, and receives the encrypted commits via this API. It also subscribes and publishes to the Pub/Sub with that API.
Then, it exposes the AppProtocol to the Application level, this is what is used to access and modify the data.
Sometimes the Verifier and the Broker are on the same machine, in the same process, so they use the LocalTransport, which doesn't even touch the network interface. That's the beauty of the NextGraph code: it has been designed from the beginning with many use cases in mind.
Sometimes the Verifier and the App are in the same process; sometimes they need a websocket between them. But all of this is implementation detail. For developers, the same API is available everywhere: in nodeJS, in front-end Javascript, in Rust, and similarly as commands in the CLI, regardless of where the Verifier and the Broker are actually located.
In some cases, a broker (ngd) will run, let's say, on localhost or within a LAN network, and will not be directly connected to the core network. This can happen in the following schema. This is called a Server Broker, and it doesn't join the core network. Instead, it needs to establish a connection to a CoreBroker that will join the core network on its behalf. It uses the ClientProtocol for that, in a special mode called “Forwarding”, as it forwards all ClientProtocol requests coming from the Verifier(s) to another broker called the CoreBroker. It keeps local copies of the events and manages a local table of pub/sub subscriptions, but does not join overlays by itself; this is delegated to the CoreBroker(s) it connects to.
This Forwarding Client Protocol is not coded yet (but it is just an add-on to the ClientProtocol).
Also, the Relay/Tunnel feature is not finished. But very few tasks remain in order to have it running.
Finally, the CoreProtocol, between core brokers, has not been coded yet and will need more work. It implements the LoCaPs algorithm for guaranteeing partial causal order of delivery of the events in the pub/sub, while minimizing the need for direct connectivity, as only one stable path between 2 core brokers within the core network is needed to guarantee correct delivery.