Self-host installation
We offer a self-hosted version of the Tuist server for organizations that require more control over their infrastructure. This version allows you to host Tuist on your own infrastructure, ensuring that your data remains secure and private.
LICENSE REQUIRED
Self-hosting Tuist requires a legally valid paid license. The on-premise version of Tuist is available only for organizations on the Enterprise plan. If you are interested in this version, please reach out to [email protected].
Release cadence
We release new versions of Tuist continuously as new releasable changes land on main. We follow semantic versioning to ensure predictable versioning and compatibility.
The major component flags breaking changes in the Tuist server that will require coordination with on-premise users. You should not expect us to use it, and if we ever need to, rest assured we'll work with you to make the transition smooth.
Continuous deployment
We strongly recommend setting up a continuous deployment pipeline that automatically deploys the latest version of Tuist every day. This ensures you always have access to the latest features, improvements, and security updates.
Here's an example GitHub Actions workflow that checks for and deploys new versions daily:
name: Update Tuist Server
on:
  schedule:
    - cron: '0 3 * * *' # Run daily at 3 AM UTC
  workflow_dispatch: # Allow manual runs
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - name: Check and deploy latest version
        run: |
          # Your deployment commands here
          # Example: docker pull ghcr.io/tuist/tuist:latest
          # Deploy to your infrastructure
Runtime requirements
This section outlines the requirements for hosting the Tuist server on your infrastructure.
Running Docker-virtualized images
We distribute the server as a Docker image via GitHub’s Container Registry.
To run it, your infrastructure must support running Docker images. Note that most infrastructure providers support it because containers have become the standard for distributing and running software in production environments.
Postgres database
In addition to running the Docker images, you'll need a Postgres database to store relational data. Most infrastructure providers include Postgres databases in their offering (e.g., AWS & Google Cloud).
For performant analytics, we use the TimescaleDB Postgres extension. You need to make sure that TimescaleDB is installed on the machine running the Postgres database. Follow the installation instructions here to learn more. If you are unable to install the Timescale extension, you can set up your own dashboard using the Prometheus metrics.
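If you manage the database yourself, a quick way to confirm the extension is available is via psql. This is only a sketch and assumes psql is installed and that DATABASE_URL points at your Postgres instance:

```bash
# Verify that TimescaleDB can be enabled on the database referenced by
# DATABASE_URL (the extension binaries must already be installed on the host).
psql "$DATABASE_URL" -c "CREATE EXTENSION IF NOT EXISTS timescaledb;"
psql "$DATABASE_URL" -c "SELECT extname, extversion FROM pg_extension WHERE extname = 'timescaledb';"
```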
MIGRATIONS
The Docker image's entrypoint automatically runs any pending schema migrations before starting the service.
ClickHouse database
To store large amounts of data, Tuist uses ClickHouse. Some features, such as build insights, only work with ClickHouse enabled. ClickHouse will eventually replace the Timescale Postgres extension. You can choose whether to self-host ClickHouse or use their hosted service.
MIGRATIONS
The Docker image's entrypoint automatically runs any pending ClickHouse schema migrations before starting the service.
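If you decide to self-host ClickHouse, one option is the official Docker image. The following is only an evaluation sketch; the credentials, database name, and URL are placeholders you'd adapt for production:

```bash
# Run a single-node ClickHouse with the official image (placeholder credentials).
docker run -d --name clickhouse \
  -p 8123:8123 \
  -e CLICKHOUSE_USER=tuist \
  -e CLICKHOUSE_PASSWORD=changeme \
  clickhouse/clickhouse-server:latest

# The server is then reachable over HTTP, e.g.:
# TUIST_CLICKHOUSE_URL=http://tuist:changeme@localhost:8123/default
```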
Storage
You’ll also need a solution to store files (e.g. framework and library binaries). Currently, we support any S3-compliant storage.
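For example, with AWS you could provision a dedicated bucket using the AWS CLI; the bucket name and region below are placeholders:

```bash
# Create a bucket for Tuist artifacts (placeholder name and region).
aws s3api create-bucket \
  --bucket tuist-artifacts \
  --region us-west-2 \
  --create-bucket-configuration LocationConstraint=us-west-2
```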
Configuration
The configuration of the service is done at runtime through environment variables. Given the sensitive nature of these variables, we advise encrypting and storing them in secure password management solutions. Rest assured, Tuist handles these variables with utmost care, ensuring they are never displayed in logs.
LAUNCH CHECKS
The necessary variables are verified at startup. If any are missing, the launch will fail and the error message will detail the absent variables.
License configuration
As an on-premise user, you'll receive a license key that you'll need to expose as an environment variable. This key is used to validate the license and ensure that the service is running within the terms of the agreement.
Environment variable | Description | Required | Default | Example |
---|---|---|---|---|
TUIST_LICENSE | The license provided after signing the service level agreement | Yes | | ****** |
EXPIRATION DATE
Licenses have an expiration date. If the license expires in fewer than 30 days, users will see a warning when running Tuist commands that interact with the server. If you are interested in renewing your license, please reach out to [email protected].
Base environment configuration
Environment variable | Description | Required | Default | Example |
---|---|---|---|---|
TUIST_APP_URL | The base URL to access the instance from the Internet | Yes | | https://tuist.dev |
TUIST_SECRET_KEY_BASE | The key to use to encrypt information (e.g. sessions in a cookie) | Yes | ||
TUIST_SECRET_KEY_PASSWORD | Pepper to generate hashed passwords | No | $TUIST_SECRET_KEY_BASE | |
TUIST_SECRET_KEY_TOKENS | Secret key to generate random tokens | No | $TUIST_SECRET_KEY_BASE | |
TUIST_SECRET_KEY_ENCRYPTION | 32-byte key for AES-GCM encryption of sensitive data | No | $TUIST_SECRET_KEY_BASE | |
TUIST_USE_IPV6 | When 1 it configures the app to use IPv6 addresses | No | 0 | 1 |
TUIST_LOG_LEVEL | The log level to use for the app | No | info | Log levels |
TUIST_GITHUB_APP_PRIVATE_KEY | The private key used for the GitHub app to unlock extra functionality such as posting automatic PR comments | No | | -----BEGIN RSA... |
TUIST_OPS_USER_HANDLES | A comma-separated list of user handles that have access to the operations URLs | No | | user1,user2 |
TUIST_WEB | Whether to run the web server component | No | 1 | 1 or 0 |
TUIST_WORKER | Whether to run the background job processing component | No | 1 | 1 or 0 |
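For example, TUIST_SECRET_KEY_BASE can be generated with openssl; this is only a sketch, and any cryptographically secure random string works:

```bash
# Generate a random secret; the other TUIST_SECRET_KEY_* variables fall back
# to this value when they are not set.
export TUIST_SECRET_KEY_BASE="$(openssl rand -hex 64)"
```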
WEB SERVER AND BACKGROUND WORKER SEPARATION
By default, both the web server and background job processing run in the same process for simplicity. However, you can separate them by running multiple instances of the Docker image with different configurations:
- Web server only: Set `TUIST_WEB=1` and `TUIST_WORKER=0`
- Background workers only: Set `TUIST_WEB=0` and `TUIST_WORKER=1`

This separation allows you to scale web servers and background workers independently based on your workload requirements, as sketched below.
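As a rough sketch, the split could look like two containers started from the same image with shared configuration; the `tuist.env` file and the published port are placeholders:

```bash
# Web server only: serves HTTP traffic, no background jobs.
docker run -d --name tuist-web \
  --env-file tuist.env -e TUIST_WEB=1 -e TUIST_WORKER=0 \
  -p 8080:8080 ghcr.io/tuist/tuist:latest

# Background workers only: processes jobs, serves no HTTP traffic.
docker run -d --name tuist-worker \
  --env-file tuist.env -e TUIST_WEB=0 -e TUIST_WORKER=1 \
  ghcr.io/tuist/tuist:latest
```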
Database configuration
The following environment variables are used to configure the database connection:
Environment variable | Description | Required | Default | Example |
---|---|---|---|---|
DATABASE_URL | The URL to access the Postgres database. Note that the URL should contain the authentication information | Yes | | postgres://username:password@host/production |
TUIST_CLICKHOUSE_URL | The URL to access the ClickHouse database. Note that the URL should contain the authentication information | No | | http://username:password@host/production |
TUIST_USE_SSL_FOR_DATABASE | When true, it uses SSL to connect to the database | No | 1 | 1 |
TUIST_DATABASE_POOL_SIZE | The number of connections to keep open in the connection pool | No | 10 | 10 |
TUIST_DATABASE_QUEUE_TARGET | The interval (in milliseconds) for checking if all the connections checked out from the pool took more than the queue interval (More information) | No | 300 | 300 |
TUIST_DATABASE_QUEUE_INTERVAL | The threshold time (in milliseconds) in the queue that the pool uses to determine if it should start dropping new connections (More information) | No | 1000 | 1000 |
TUIST_CLICKHOUSE_FLUSH_INTERVAL_MS | Time interval in milliseconds between ClickHouse buffer flushes | No | 5000 | 5000 |
TUIST_CLICKHOUSE_MAX_BUFFER_SIZE | Maximum ClickHouse buffer size in bytes before forcing a flush | No | 1000000 | 1000000 |
TUIST_CLICKHOUSE_BUFFER_POOL_SIZE | Number of ClickHouse buffer processes to run | No | 5 | 5 |
Authentication environment configuration
We facilitate authentication through identity providers (IdP). To utilize this, ensure all necessary environment variables for the chosen provider are present in the server's environment. Missing variables will result in Tuist bypassing that provider.
GitHub
We recommend authenticating using a GitHub App, but you can also use an OAuth App. Make sure to include all essential environment variables specified by GitHub in the server environment; absent variables will cause Tuist to skip GitHub authentication. To properly set up the GitHub app:
- In the GitHub app's general settings:
  - Copy the `Client ID` and set it as `TUIST_GITHUB_APP_CLIENT_ID`
  - Create and copy a new `client secret` and set it as `TUIST_GITHUB_APP_CLIENT_SECRET`
  - Set the `Callback URL` as `http://YOUR_APP_URL/users/auth/github/callback`. `YOUR_APP_URL` can also be your server's IP address.
- The following permissions are required:
  - Repositories:
    - Pull requests: Read and write
  - Accounts:
    - Email addresses: Read-only

In the `Permissions and events`' `Account permissions` section, set the `Email addresses` permission to `Read-only`.
You'll then need to expose the following environment variables in the environment where the Tuist server runs:
Environment variable | Description | Required | Default | Example |
---|---|---|---|---|
TUIST_GITHUB_APP_CLIENT_ID | The client ID of the GitHub application | Yes | | Iv1.a629723000043722 |
TUIST_GITHUB_APP_CLIENT_SECRET | The client secret of the application | Yes | | 232f972951033b89799b0fd24566a04d83f44ccc |
Google
You can set up authentication with Google using OAuth 2. For that, you'll need to create a new credential of type OAuth client ID. When creating the credentials, select "Web Application" as the application type, name it `Tuist`, and set the redirect URI to `{base_url}/users/auth/google/callback`, where `base_url` is the URL your hosted service runs at. Once you create the app, copy the client ID and secret and set them as the environment variables `GOOGLE_CLIENT_ID` and `GOOGLE_CLIENT_SECRET` respectively.
CONSENT SCREEN SCOPES
You might need to create a consent screen. When you do so, make sure to add the `userinfo.email` and `openid` scopes and mark the app as internal.
Okta
You can enable authentication with Okta through the OAuth 2.0 protocol. You'll have to create an app on Okta following these instructions.
Once you obtain the client ID and secret during the setup of the Okta application, you will need to set the following environment variables:
Environment variable | Description | Required | Default | Example |
---|---|---|---|---|
TUIST_OKTA_1_CLIENT_ID | The client ID to authenticate against Okta. The number should be your organization ID | Yes | ||
TUIST_OKTA_1_CLIENT_SECRET | The client secret to authenticate against Okta | Yes | | |

The number `1` needs to be replaced with your organization ID. This will typically be 1, but check in your database.
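If you're unsure of the ID, you can look it up directly in the Postgres database. The query below is only a hypothetical sketch; the table and column names are assumptions about the schema and may differ:

```bash
# Hypothetical lookup of organization IDs; adjust the table name to your schema.
psql "$DATABASE_URL" -c "SELECT id, name FROM organizations;"
```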
Storage environment configuration
Tuist needs storage to house artifacts uploaded through the API. It's essential to configure one of the supported storage solutions for Tuist to operate effectively.
S3-compliant storages
You can use any S3-compliant storage provider to store artifacts. The following environment variables are required to authenticate and configure the integration with the storage provider:
Environment variable | Description | Required | Default | Example |
---|---|---|---|---|
TUIST_ACCESS_KEY_ID or AWS_ACCESS_KEY_ID | The access key ID to authenticate against the storage provider | Yes | | AKIAIOSFOD |
TUIST_SECRET_ACCESS_KEY or AWS_SECRET_ACCESS_KEY | The secret access key to authenticate against the storage provider | Yes | | wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY |
TUIST_S3_REGION or AWS_REGION | The region where the bucket is located | Yes | | us-west-2 |
TUIST_S3_ENDPOINT or AWS_ENDPOINT | The endpoint of the storage provider | Yes | | https://s3.us-west-2.amazonaws.com |
TUIST_S3_BUCKET_NAME | The name of the bucket where the artifacts will be stored | Yes | | tuist-artifacts |
TUIST_S3_REQUEST_TIMEOUT | The timeout (in seconds) for requests to the storage provider | No | 30 | 30 |
TUIST_S3_POOL_TIMEOUT | The timeout (in seconds) for the connection pool to the storage provider | No | 5 | 5 |
TUIST_S3_POOL_COUNT | The number of pools to use for connections to the storage provider | No | 1 | 1 |
TUIST_S3_PROTOCOL | The protocol to use when connecting to the storage provider (http1 or http2) | No | http2 | http2 |
TUIST_S3_VIRTUAL_HOST | Whether the URL should be constructed with the bucket name as a sub-domain (virtual host). | No | No | 1 |
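As an illustration, a configuration for a generic S3-compliant store (for example a self-hosted MinIO) might look like the following; every value is a placeholder:

```bash
# Placeholder values for an S3-compliant storage provider.
export TUIST_ACCESS_KEY_ID="minio"
export TUIST_SECRET_ACCESS_KEY="minio-secret"
export TUIST_S3_REGION="us-east-1"
export TUIST_S3_ENDPOINT="https://minio.internal.example.com"
export TUIST_S3_BUCKET_NAME="tuist-artifacts"
```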
AWS authentication with Web Identity Token from environment variables
If your storage provider is AWS and you'd like to authenticate using a web identity token, you can set the environment variable `TUIST_S3_AUTHENTICATION_METHOD` to `aws_web_identity_token_from_env_vars`, and Tuist will use that method with the conventional AWS environment variables.
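For instance, on EKS with IAM Roles for Service Accounts the conventional variables are injected automatically; setting them manually would look roughly like this (the role ARN and token path are placeholders):

```bash
export TUIST_S3_AUTHENTICATION_METHOD=aws_web_identity_token_from_env_vars
# Standard AWS variables consumed by web-identity authentication:
export AWS_ROLE_ARN="arn:aws:iam::123456789012:role/tuist-storage"
export AWS_WEB_IDENTITY_TOKEN_FILE="/var/run/secrets/eks.amazonaws.com/serviceaccount/token"
```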
Google Cloud Storage
For Google Cloud Storage, follow these docs to get the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` pair. The `AWS_ENDPOINT` should be set to `https://storage.googleapis.com`. Other environment variables are the same as for any other S3-compliant storage.
Git platform configuration
Tuist can integrate with Git platforms to provide extra features such as automatically posting comments in your pull requests.
GitHub
You will need to create a GitHub app. You can reuse the one you created for authentication, unless you created an OAuth GitHub app. In the `Permissions and events`' `Repository permissions` section, you will need to additionally set the `Pull requests` permission to `Read and write`.

On top of the `TUIST_GITHUB_APP_CLIENT_ID` and `TUIST_GITHUB_APP_CLIENT_SECRET`, you will need the following environment variables:
Environment variable | Description | Required | Default | Example |
---|---|---|---|---|
TUIST_GITHUB_APP_PRIVATE_KEY | The private key of the GitHub application | Yes | | -----BEGIN RSA PRIVATE KEY-----... |
Deployment
The official Tuist Docker image is available at:
ghcr.io/tuist/tuist
Pulling the Docker image
You can retrieve the image by executing the following command:
docker pull ghcr.io/tuist/tuist:latest
Or pull a specific version:
docker pull ghcr.io/tuist/tuist:0.1.0
Deploying the Docker image
The deployment process for the Docker image will differ based on your chosen cloud provider and your organization's continuous deployment approach. Since most cloud solutions and tools, like Kubernetes, utilize Docker images as fundamental units, the examples in this section should align well with your existing setup.
IMPORTANT
If your deployment pipeline needs to validate that the server is up and running, you can send a `GET` HTTP request to `/ready` and assert a `200` status code in the response.
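For example, a minimal readiness check with curl, assuming the server is reachable at `$TUIST_APP_URL`, could be:

```bash
# Fails (non-zero exit) unless /ready returns a successful status code.
curl --fail --silent --show-error --output /dev/null "$TUIST_APP_URL/ready" && echo "Tuist server is ready"
```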
Fly
To deploy the app on Fly, you'll require a `fly.toml` configuration file. Consider generating it dynamically within your Continuous Deployment (CD) pipeline. Below is a reference example for your use:
app = "tuist"
primary_region = "fra"
kill_signal = "SIGINT"
kill_timeout = "5s"
[experimental]
auto_rollback = true
[env]
# Your environment configuration goes here
# Or exposed through Fly secrets
[processes]
app = "/usr/local/bin/hivemind /app/Procfile"
[[services]]
protocol = "tcp"
internal_port = 8080
auto_stop_machines = false
auto_start_machines = false
processes = ["app"]
http_options = { h2_backend = true }
[[services.ports]]
port = 80
handlers = ["http"]
force_https = true
[[services.ports]]
port = 443
handlers = ["tls", "http"]
[services.concurrency]
type = "connections"
hard_limit = 100
soft_limit = 80
[[services.http_checks]]
interval = 10000
grace_period = "10s"
method = "get"
path = "/ready"
protocol = "http"
timeout = 2000
tls_skip_verify = false
[services.http_checks.headers]
[[statics]]
guest_path = "/app/public"
url_prefix = "/"
Then you can run `fly launch --local-only --no-deploy` to launch the app. On subsequent deploys, instead of running `fly launch --local-only`, you will need to run `fly deploy --local-only`. Fly.io doesn't allow pulling private Docker images, which is why we need to use the `--local-only` flag.
Docker Compose
Below is an example of a `docker-compose.yml` file that you can use as a reference to deploy the service:
version: '3.8'
services:
  db:
    image: timescale/timescaledb-ha:pg16
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - PGDATA=/var/lib/postgresql/data/pgdata
    ports:
      - '5432:5432'
    volumes:
      - db:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

  pgweb:
    container_name: pgweb
    restart: always
    image: sosedoff/pgweb
    ports:
      - "8081:8081"
    links:
      - db:db
    environment:
      PGWEB_DATABASE_URL: postgres://postgres:postgres@db:5432/postgres?sslmode=disable
    depends_on:
      - db

  tuist:
    image: ghcr.io/tuist/tuist:latest
    container_name: tuist
    depends_on:
      - db
    ports:
      - "80:80"
      - "8080:8080"
      - "443:443"
    expose:
      - "80"
      - "8080"
      - "443"
    environment:
      # Base Tuist Env - https://docs.tuist.io/en/guides/dashboard/on-premise/install#base-environment-configuration
      TUIST_USE_SSL_FOR_DATABASE: "0"
      TUIST_LICENSE: # ...
      DATABASE_URL: postgres://postgres:postgres@db:5432/postgres?sslmode=disable
      TUIST_APP_URL: https://localhost:8080
      TUIST_SECRET_KEY_BASE: # ...
      WEB_CONCURRENCY: 80
      # Auth - one method
      # GitHub Auth - https://docs.tuist.io/en/guides/dashboard/on-premise/install#github
      TUIST_GITHUB_OAUTH_ID:
      TUIST_GITHUB_APP_CLIENT_SECRET:
      # Okta Auth - https://docs.tuist.io/en/guides/dashboard/on-premise/install#okta
      TUIST_OKTA_SITE:
      TUIST_OKTA_CLIENT_ID:
      TUIST_OKTA_CLIENT_SECRET:
      TUIST_OKTA_AUTHORIZE_URL: # Optional
      TUIST_OKTA_TOKEN_URL: # Optional
      TUIST_OKTA_USER_INFO_URL: # Optional
      TUIST_OKTA_EVENT_HOOK_SECRET: # Optional
      # Storage
      AWS_ACCESS_KEY_ID: # ...
      AWS_SECRET_ACCESS_KEY: # ...
      AWS_S3_REGION: # ...
      AWS_ENDPOINT: # https://amazonaws.com
      TUIST_S3_BUCKET_NAME: # ...
      # Other

volumes:
  db:
    driver: local
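With the file in place, bringing the stack up and tailing the server logs follows the usual Compose workflow (assuming the file is named `docker-compose.yml` in the current directory):

```bash
# Start all services in the background and follow the Tuist server logs.
docker compose up -d
docker compose logs -f tuist
```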
Operations
Tuist provides a set of utilities under `/ops/` that you can use to manage your instance.
Authorization
Only people whose handles are listed in the `TUIST_OPS_USER_HANDLES` environment variable can access the `/ops/` endpoints.
- Errors (`/ops/errors`): You can view unexpected errors that occurred in the application. This is useful for debugging and understanding what went wrong, and we might ask you to share this information with us if you're facing issues.
- Dashboard (`/ops/dashboard`): You can view a dashboard that provides insights into the application's performance and health (e.g. memory consumption, processes running, number of requests). This dashboard can be quite useful to understand whether the hardware you're using is enough to handle the load.