Set up MLflow and Jupyter Server on Ubuntu using Anaconda

This post shows how to set up Anaconda, MLflow and Jupyter Server on Ubuntu.

Install Anaconda

Download Anaconda as you normally would from https://www.anaconda.com/download. If you prefer to use wget or need a different version, go to https://repo.anaconda.com/archive/ and download your preferred Anaconda version.

Once downloaded, run these commands to make the .sh file executable and then execute it as your normal user, not as root:

chmod +x <file_name_here>.sh
./<file_name_here>.sh

It will be installed to /home/<your_user>/anaconda3

Once it is installed, add Anaconda to your PATH.

Anaconda 2 :
export PATH=~/anaconda2/bin:$PATH

Anaconda 3 :
export PATH=~/anaconda3/bin:$PATH
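Note that the export above only applies to your current shell session. To make it permanent, you can append it to your ~/.bashrc — a sketch assuming the default Anaconda 3 install location:

```shell
# Append the PATH export to ~/.bashrc so it survives new shell sessions
# (assumes the default install location /home/<your_user>/anaconda3)
echo 'export PATH=~/anaconda3/bin:$PATH' >> ~/.bashrc

# Reload your shell config in the current session
source ~/.bashrc

# Verify conda is now on your PATH
conda --version
```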


Create a new environment in Anaconda

Create a new environment by running this command

conda create -n myenv anaconda

Or create an environment by specifying a python version

conda create -n myenv python=3.9 anaconda

Activate your conda environment

conda activate myenv

You might get an error saying you need to run “conda init” first. If you do, run this command:

conda init

Run this command to list your conda environments:
conda env list


Install Anaconda Navigator

Make sure you are in your base conda environment. If you are still in your virtual environment, run

conda deactivate

Your terminal should show (base). If not, run

conda activate

Now run

conda install -c anaconda anaconda-navigator

And then launch it with

anaconda-navigator

To update all anaconda packages run

conda update --all

This will only update the packages in your active environment.


Access Jupyter Server over the network with a password

Activate your custom conda environment.

This approach puts all the config in the “server” configuration (the newer way of doing things): we use the “jupyter server” command and its config files instead of the legacy “jupyter notebook” command and config files.

Create a Jupyter Server config file

jupyter server --generate-config

The config file will be written to /home/<your_user>/.jupyter/jupyter_server_config.py

Add the following to the bottom of the config file

This will make Jupyter Server listen on all interfaces so it can be accessed from other machines on your network

c.ServerApp.ip = '0.0.0.0'
c.ServerApp.port = 8888
c.ServerApp.password_required = True
c.ServerApp.open_browser = False

Set a password for Jupyter Server

# The config goes to /home/<your_user>/.jupyter/jupyter_server_config.json
jupyter server password

To start Jupyter, you will now run this command

jupyter server

Keep the terminal open to keep the Jupyter Server alive.


Automatically start Jupyter using Systemctl

Instead of having to start Jupyter manually every time, we can start Jupyter up automatically using systemctl:

# Create a directory where your projects will be stored
mkdir ~/jupyter_projects

# Get your jupyter executable from your custom conda env
which jupyter

# Create the systemd file
sudo nano /etc/systemd/system/jupyter.service

Now paste the below in your nano window and replace the username and paths with your own values:

[Unit]
Description=Jupyter Server
After=network.target

# Replace with your username
[Service]
User=svenml
Group=svenml

# This directory will be the "Root" folder when you open Jupyter. It must match your mkdir command's path above.
WorkingDirectory=/home/svenml/jupyter_projects

# PATH TO JUPYTER from the "which" command above.
# We use 'jupyter server' here to match the config file we created earlier,
# but you can change 'server' to 'lab' or 'notebook' if you prefer.
ExecStart=/home/svenml/anaconda3/envs/myenv/bin/jupyter server --ip=0.0.0.0 --port=8888 --no-browser

Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Let's start the service now and make it start on boot:

# Reload systemd to recognize the new service
sudo systemctl daemon-reload

# Enable it to start automatically on boot
sudo systemctl enable jupyter

# Start it right now
sudo systemctl start jupyter

If you need to check the logs, run this command:

journalctl -u jupyter -f

You can now open Jupyter in your browser at http://192.168.1.129:8888/lab (replace the IP with your server's address).


Add Intellisense to Jupyter

Make sure your custom conda env is activated and run:

conda install -c conda-forge jupyterlab-lsp python-lsp-server

Then restart your Jupyter server:

sudo systemctl restart jupyter


Jupyter custom Kernel

Because we have our own custom conda environment, we need to make it available in Jupyter; otherwise we can’t use our installed packages.

Run these commands in your custom conda environment:

conda activate myenv
conda install ipykernel
python -m ipykernel install --user --name=myenv --display-name "Python (myenv)"

# If you already have Jupyter running as systemctl
sudo systemctl restart jupyter.service


Update your default Jupyter Server root_dir location

If you want to see all the current config and data directories for Jupyter, run this command

jupyter --paths

If you want to update the default location where notebook files are stored, add this to the end of /home/<your_user>/.jupyter/jupyter_server_config.py

c.ServerApp.root_dir = '<your new location here>'


Install MLflow from conda-forge

Activate your custom conda environment for these steps.

Search for the package on conda-forge.

conda search -c conda-forge mlflow

Then install it

conda install -c conda-forge mlflow


Start the MLflow web UI

Run this command in a terminal

mlflow server --host 0.0.0.0 --port 8080

0.0.0.0 ensures it is available on your network, not just localhost.
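With the server running, you can point an MLflow client at it and log a test run. A minimal sketch, assuming MLflow is installed in your environment and the server is reachable at http://192.168.1.129:8080 (replace with your server's address); the experiment and run names are just placeholders:

```python
import mlflow

# Point the client at your tracking server (replace with your server's address)
mlflow.set_tracking_uri("http://192.168.1.129:8080")

# Creates the experiment on first use, then reuses it on later runs
mlflow.set_experiment("smoke-test")

with mlflow.start_run(run_name="hello-mlflow"):
    # Log a hyperparameter and a few metric values over "steps"
    mlflow.log_param("learning_rate", 0.01)
    for step, loss in enumerate([0.9, 0.5, 0.3]):
        mlflow.log_metric("loss", loss, step=step)
```

The run should then show up under the “smoke-test” experiment in the web UI.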


Automatically start MLFlow using Systemctl

I don’t want to start up MLflow manually every time I need it. We can use systemctl to ensure MLflow starts automatically whenever our server starts.

# Create a folder to store your project files (this will be in your user's home folder)
mkdir ~/mlflow_projects

Activate your conda environment where mlflow is installed and run:
which mlflow

The above output will look something like this:
/home/svenml/anaconda3/envs/myenv/bin/mlflow

Create a new systemd config
sudo nano /etc/systemd/system/mlflow.service

Paste the below into your new systemctl config file:

[Unit]
Description=MLflow Tracking Server
After=network.target

[Service]
# Replace with your actual username
User=svenml
Group=svenml

WorkingDirectory=/home/svenml/mlflow_projects

# Point this to the path you found in your which statement above
# Note: We use the full path so we don't need to 'activate' conda
# Note: the "--default-artifact-root mlruns" switch will be deprecated Feb 2026,
# which is why it is not used here
ExecStart=/home/svenml/anaconda3/envs/myenv/bin/mlflow server \
    --host 0.0.0.0 \
    --port 8080 \
    --backend-store-uri sqlite:///mlflow.db \
    --serve-artifacts \
    --allowed-hosts "*.internal.leighonline.net,localhost:*"

# Automatically restart the service if it crashes
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

--default-artifact-root mlruns: if you add this switch, it creates the mlruns folder inside the “mkdir” location that you specified above.

Take note of the User and Group entries above, and also the “allowed-hosts” parameter: if you want to run MLflow behind a reverse proxy, you need to set this value accordingly.

Now run these commands to ensure mlflow starts up when your system starts and start it up now:

# Reload systemd to recognize the new service
sudo systemctl daemon-reload

# Enable it to start automatically on boot
sudo systemctl enable mlflow

# Start it right now
sudo systemctl start mlflow

If mlflow fails to start, you can check the logs like this:

journalctl -u mlflow -f


Quick MLflow overview

Getting started: https://mlflow.org/docs/latest/getting-started/logging-first-model/step1-tracking-server.html

This tutorial will explain how to create the following:

Experiments, which are essentially the base of everything. Every training run gets logged under an experiment so that you can visualize its loss, accuracy, and so on.

[Screenshot: Experiment logging]

When you click on an experiment, it shows you all of that experiment's properties, such as the model's parameters and artifacts (pip requirements, input examples, and so on).

[Screenshot: Model properties]


Here all the artifacts for this experiment are displayed:

[Screenshot: Model artifacts]

When you are satisfied with an experiment, i.e. there is a specific run you like, you can register a model from it.

This is how you can register a model:

Click on the experiment, and at the top right click “Register Model”.

[Screenshot: Register model]


This is where you can find your registered models:

[Screenshot: Registered models]


When you click on your model you can see the various versions. A model can have multiple versions, and using MLflow you can pull a specific model and version:

[Screenshot: Model versions]


You can then click on a version and “promote” it to e.g. production. Promotion is a maturity convention, and the link below talks more about it.

[Screenshot: Promote model]

This shows how to pull models from the registry and talks a lot more about promoting models: https://mlflow.org/docs/latest/model-registry.html


Here is a quick snippet from this URL showing how to use one of your registered models:

[Screenshot: How to use a model]
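As a sketch of what such a snippet looks like, you can load a registered model by name and version through the models:/ URI scheme. Here “MyModel” and version 1 are hypothetical placeholders, and the tracking URI should be your own server's address:

```python
import mlflow

# Point the client at your tracking server (replace with your server's address)
mlflow.set_tracking_uri("http://192.168.1.129:8080")

# "MyModel" and version "1" are placeholders; use your registered
# model's actual name and version from the registry
model = mlflow.pyfunc.load_model("models:/MyModel/1")

# predict() accepts input shaped like the model's training data,
# e.g. a pandas DataFrame:
# predictions = model.predict(input_df)
```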

necrolingus

Tech enthusiast and home labber