Initial steps to deploy and configure the Data Access Components


All services and agents run as Docker containers, so a running Docker daemon is required. Official installation guides can be found in the Docker Installation instructions.

When developing new agents, Python 3.7 or higher is required. The latest version can be downloaded from the Python Official Website.

If you plan to extend the REST API, you will need a development environment with Node.js v16 or higher. You can download the latest Node.js from the Node.js Official Website. As a development tool, Visual Studio Code is recommended, but any other IDE with JavaScript support can be used.


Enabling access to the internal Docker Engine REST API

The DAM’s REST API manages the lifecycle of agents by launching them as Docker containers. To do so, the Docker Engine API has to be exposed over TCP, as this access is restricted by default for security reasons.

If Docker Desktop is available on the machine running the DAM’s API, open the settings and make sure the Expose daemon on tcp://localhost:2375 without TLS option is checked.

If Docker Desktop is not available, follow the Dockerd reference guide to run the Docker daemon with support for listening on a TCP port.
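As a sketch of one common approach on Linux (adjust paths for your distribution), the TCP listener can be added to /etc/docker/daemon.json alongside the default Unix socket:

```json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://127.0.0.1:2375"]
}
```

Note that on systemd-based distributions the default service unit already passes a `-H` flag, which conflicts with the `hosts` key; the Dockerd reference linked above covers the required service override. Since port 2375 carries no TLS, bind it only to localhost.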

Data Access Manager installation

The different services required to run the DAM can be bootstrapped using a Docker Compose file available in the DAM Deployment repository.

The docker-compose file will build all components from the latest source code available in the different GitHub repositories (see the Source Code section for additional details).

Once all repositories are cloned locally (respecting the naming conventions and relative paths described in the DAM Deployment repository readme file), review the different environment variables that can be set in the docker-compose.yml file, especially KEYCLOAK_URL, which points to the IAM server providing authentication (see the Security Guide). The whole stack can be launched using the command:

docker-compose up -d --build

The --build flag forces the Docker images to be rebuilt; any previously running containers are removed and recreated from the latest version.
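As an illustration of the environment variables mentioned above (only KEYCLOAK_URL is taken from this guide; the service name matches the API container below and the value is a hypothetical placeholder), the relevant section of docker-compose.yml might look like:

```yaml
services:
  dataports-api:
    environment:
      # URL of the Keycloak IAM server providing authentication
      - KEYCLOAK_URL=https://iam.example.org/auth
```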

Once the Docker daemon finishes building and launching the whole stack, the following services will be running:

  • dataports-api: this container hosts the REST API that is used by both agents and the Web UI
  • dataports-ui: this container serves the Web UI
  • mongo: a MongoDB instance used by the Web UI
  • cygnus: this container runs the official Cygnus enabler provided by the FIWARE platform. When using on-demand agents, this is the component that receives data objects and persists them in the mongo_cygnus database
  • mongo_cygnus: a MongoDB instance that stores data from the Cygnus component.
  • orion: this container runs the official Orion Context Broker provided by the FIWARE platform. When using publish/subscribe agents, this component receives data entities and persists them in a database.
  • mongo_fiware: a MongoDB instance where orion stores its data.

Additionally, the following two containers will be in a stopped state, since they run a database provisioning script and exit:

  • mongo-seed-datamodels: creates data models in the mongo database
  • mongo-seed-templates: creates agent template definitions in the mongo database

Data Access API Endpoints

The DAM’s API service provides a REST API used by agents and the DAM’s UI. This service exposes an OpenAPI document accessible at http://localhost:3000/api-docs/
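The document is plain JSON, so the endpoint groups (OpenAPI tags) can also be listed programmatically. A minimal sketch, run here against a small inline sample rather than the live service (the paths and the "Docker Image" tag in the sample are illustrative, not taken from the real API):

```python
import json

# In practice this JSON would be fetched from http://localhost:3000/api-docs/
# (e.g. with urllib.request); an inline sample is used here for illustration.
sample = json.loads("""
{
  "openapi": "3.0.0",
  "paths": {
    "/images": {"get": {"tags": ["Docker Image"]}},
    "/agents": {"post": {"tags": ["NGSI Agent"]}}
  }
}
""")

def list_groups(spec):
    """Collect the distinct tags used across all operations."""
    tags = set()
    for methods in spec.get("paths", {}).values():
        for operation in methods.values():
            tags.update(operation.get("tags", []))
    return sorted(tags)

print(list_groups(sample))  # ['Docker Image', 'NGSI Agent']
```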

The OpenAPI interface contains the different groups shown in the following screenshot:


This section groups all the endpoints used to manipulate the Docker images implementing agents.


This section contains a single endpoint used to launch on-demand agents.

NGSI Agent

This section contains all the endpoints used to manage publish/subscribe agents. One or more agent instances can be created, stopped, deleted, restarted, etc. There are additionally methods to inspect the Docker variables associated with the agent’s container and to download the log traces since a specific date.


This section contains the endpoints needed to manage Orion subscriptions. Subscriptions are used to register a webhook to which certain publish/subscribe data will be forwarded any time a value is written (or updated) in the context broker.
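For reference, an Orion (NGSIv2) subscription is itself a JSON document that registers the webhook. A minimal sketch, where the entity type, attribute, and notification URL are placeholders:

```json
{
  "description": "Forward temperature updates to a DAM webhook",
  "subject": {
    "entities": [{"idPattern": ".*", "type": "Room"}],
    "condition": {"attrs": ["temperature"]}
  },
  "notification": {
    "http": {"url": "http://dataports-api:3000/notify"},
    "attrs": ["temperature"]
  }
}
```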


This section contains endpoints to manipulate some internal entities created by the DAM in the context broker.

Data Model

Endpoints used to retrieve the list of registered data models and to add new ones.


This section contains a single endpoint used by on-demand agents to signal the DAM that they have finished importing data.

Python Template

This section provides endpoints to query the available agent templates and to download pre-filled agents built from the data provided through the DAM’s UI wizard.


This section contains a single endpoint used to obtain JWT tokens from the IAM service. These tokens are used by the DAM’s UI and by some agents.
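A JWT consists of three base64url-encoded segments (header, payload, signature) joined by dots. The sketch below decodes the payload of a locally built sample token without verifying the signature, just to illustrate the structure; the claim names and values are hypothetical, and in practice the DAM’s UI and agents pass the token as-is in an Authorization: Bearer header:

```python
import base64
import json

def b64url_decode(segment: str) -> bytes:
    """Decode a base64url segment, restoring any stripped '=' padding."""
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def jwt_payload(token: str) -> dict:
    """Return the (unverified) claims carried in a JWT's payload segment."""
    header, payload, signature = token.split(".")
    return json.loads(b64url_decode(payload))

# Build an unsigned sample token for illustration (claims are hypothetical).
claims = {"sub": "dam-ui", "preferred_username": "alice"}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
sample = "eyJhbGciOiJub25lIn0." + payload + "."

print(jwt_payload(sample)["preferred_username"])  # alice
```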

Data Access UI

The stack deploys the Web UI at https://localhost:8080. Since the stack deployed in the previous steps is not meant for a production environment, this URL will raise a certificate validation error. If you plan to link this component to a domain name, consider setting up a reverse proxy to this port.
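As a sketch of such a reverse proxy (the domain name, certificate paths, and any production hardening are placeholders to adapt), an nginx server block might look like:

```nginx
server {
    listen 443 ssl;
    server_name dam.example.org;

    ssl_certificate     /etc/ssl/certs/dam.example.org.crt;
    ssl_certificate_key /etc/ssl/private/dam.example.org.key;

    location / {
        # Forward requests to the Web UI container
        proxy_pass https://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```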

To gain access to the DAM’s UI, you need credentials created in the project’s IAM (refer to the Security Guide).

Last modified May 12, 2023: adding dam contents (217a93d)