Getting Started
For a fast deployment of the whole dashboard, we provide a docker-compose file that launches a set of containers. First, set up Docker following the specific instructions for your OS and then install Docker Compose. The deployment configuration is defined in a docker-compose.yml file and can be deployed using the following standard command:
docker-compose up
We recommend checking the file to understand the redirected ports and the required volumes, which can be configured following the official documentation.
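For illustration only, a service entry in a compose file usually declares its ports and volumes as in the sketch below; the service name, image, port numbers, and paths are assumptions for the example, not the actual contents of the project's docker-compose.yml:

  frontend:
    image: dashboard-frontend        # hypothetical image name
    ports:
      - "8080:80"                    # host port 8080 redirected to container port 80
    volumes:
      - ./config:/app/config         # host folder mounted into the container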
Installation
Frontend
The frontend is built with Angular and the Argon template, together with a number of additional libraries and components.
Setup with Angular
First, you must install Node.js version v16.16.0 (LTS) following the instructions for your specific OS. Next, you must install the Angular CLI using the following command:
npm install -g @angular/cli@14.2.4
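If the installation succeeded, you can check the installed versions with the standard commands:

node --version
ng version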
Finally, you must move to the folder containing the source code and execute the following commands:
npm install --legacy-peer-deps
npm start
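Note that in Angular CLI projects npm start typically runs ng serve, which serves the application at http://localhost:4200 by default; the port may differ if the template overrides this default.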
API Backend
The API Backend is developed using Django, a framework that encourages documenting the API using OpenAPI.
To execute the backend using Docker, you must run the following command from the same path as the Dockerfile, inside the backend folder:
docker-compose up
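If you modify the backend code or its dependencies, the image can be rebuilt before starting the containers; this is a standard Docker Compose option, not something specific to this project:

docker-compose up --build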
Training Engine: ML Pipelines
The main dependencies of the Training Engine are ML frameworks and common libraries for data wrangling. The API backend also deploys the training engine, but if you want to test it or use it without an API, you can follow the instructions below.
The environment.yml file lists the dependencies and current versions of the required libraries. We strongly recommend using a Conda environment with Python 3.8. To create such an environment, you can install Anaconda or Miniconda (recommended). Once Conda is installed on your system, use the following commands:
conda env create -f environment.yml
conda activate training-engine-env
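To confirm that the environment was created and uses the expected Python version, you can run the standard Conda commands:

conda env list
python --version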
You can check that everything is working as expected by running test_pipeline.py, which generates some trained models (.pkl files) in the outcomes folder.
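As a rough illustration of how the generated artifacts can be reused, the sketch below loads one of the .pkl files with Python's standard pickle module; the exact file name inside the outcomes folder is an assumption and depends on the pipeline you run.

import pickle

# The file name below is a placeholder; replace it with one of the
# .pkl files actually produced in the outcomes folder.
with open("outcomes/model.pkl", "rb") as f:
    model = pickle.load(f)

# Inspect the loaded object to confirm it deserialized correctly.
print(type(model))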
Contributors
- Francisco Valverde - @fvalverde - Technical Project management
- Miguel Bravo - @mbravo - Data processing and analysis
- Pablo Ruiz Sánchez - @pruiz - Data processing and analysis
- Manuel Sánchez - @msanchez - Web and dashboard development
- Enrique Miravet - @emiravet - Frontend and backend development