How to deploy an identity provider¶
An identity provider must be deployed and integrated with Charmed HPC to supply your cluster with user and group information. This guide provides you with different options for how to set up an identity provider for your Charmed HPC cluster.
Follow the instructions in the External LDAP server with SSSD section if you have an existing, external LDAP server that you want to use with your Charmed HPC cluster.
Follow the instructions in the GLAuth with SSSD section if you are experimenting with Charmed HPC or are deploying a small Charmed HPC cluster.
External LDAP server with SSSD¶
This section shows you how to use an external LDAP server as your Charmed HPC cluster’s identity provider, and SSSD as the client for integrating your cluster’s login and compute nodes to the external LDAP server.
The ldap-integrator charm is used to proxy your external LDAP server’s configuration information to other charmed applications.
Prerequisites¶
An active Slurm deployment in your charmed-hpc machine cloud.
The Juju CLI client installed on your machine.
Deploy ldap-integrator and SSSD¶
You have two options for deploying ldap-integrator and SSSD:
Using the Juju CLI client.
Using the Juju Terraform client.
If you want to use Terraform to deploy ldap-integrator and SSSD, see the
Manage terraform-provider-juju how-to guide for additional
requirements.
Deploy ldap-integrator¶
First, use juju add-model to create the identity model on your
charmed-hpc machine cloud:
juju add-model identity charmed-hpc
Now use juju add-secret to create a secret for your external LDAP server’s bind password.
In this example, the external LDAP server’s bind password is "test":
secret_uri=$(juju add-secret external_ldap_password password="test")
Next, use juju deploy with the --config flag to deploy
ldap-integrator with your external LDAP server’s configuration information. In this
example, the external LDAP server’s:
base_dn is "cn=testing,cn=ubuntu,cn=com".
bind_dn is "cn=admin,dc=test,dc=ubuntu,dc=com".
bind_password is "test".
starttls mode is disabled.
urls are "ldap://10.214.237.229".
For further customization, see the full list of ldap-integrator’s available configuration options.
juju deploy ldap-integrator --channel "edge" \
  --config base_dn="cn=testing,cn=ubuntu,cn=com" \
  --config bind_dn="cn=admin,dc=test,dc=ubuntu,dc=com" \
  --config bind_password="${secret_uri}" \
  --config starttls=false \
  --config urls="ldap://10.214.237.229"
After that, use juju grant-secret to grant the ldap-integrator application
access to your external LDAP server’s bind password:
juju grant-secret external_ldap_password ldap-integrator
First, create the Terraform configuration file
ldap-integrator/main.tf using mkdir and touch:
mkdir ldap-integrator
touch ldap-integrator/main.tf
Now open ldap-integrator/main.tf in a text editor and add the Juju Terraform provider to your configuration:
terraform {
  required_providers {
    juju = {
      source = "juju/juju"
      version = "~> 1.0"
    }
  }
}
Next, create the identity model on your charmed-hpc machine cloud:
resource "juju_model" "identity" {
  name = "identity"

  cloud {
    name = "charmed-hpc"
  }
}
Next, create the external_ldap_password secret in the identity model. In this example,
the external LDAP server’s bind password is "test":
resource "juju_secret" "external_ldap_password" {
  model_uuid = juju_model.identity.uuid
  name = "external_ldap_password"

  value = {
    password = "test"
  }
}
Securely setting the external LDAP server’s bind password in a Juju secret
You can use Terraform’s built-in file function to read your bind password from a secure file rather than providing it as plain text in the ldap-integrator/main.tf plan.
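For example, a sketch of the secret resource reading the bind password from a local file; the filename bind_password.txt is illustrative:

```terraform
resource "juju_secret" "external_ldap_password" {
  model_uuid = juju_model.identity.uuid
  name = "external_ldap_password"

  value = {
    # trimspace() drops the trailing newline that most editors add
    password = trimspace(file("bind_password.txt"))
  }
}
```

Keep bind_password.txt out of version control, for example by listing it in .gitignore.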
Now deploy ldap-integrator. In this example, the external LDAP server’s:
base_dn is "cn=testing,cn=ubuntu,cn=com".
bind_dn is "cn=admin,dc=test,dc=ubuntu,dc=com".
bind_password is "test".
starttls mode is disabled.
urls are "ldap://10.214.237.229".
For further customization, see the full list of ldap-integrator’s available configuration options.
module "ldap_integrator" {
  source = "git::https://github.com/canonical/ldap-integrator//terraform"
  model_uuid = juju_model.identity.uuid

  config = {
    base_dn = "cn=testing,cn=ubuntu,cn=com"
    bind_dn = "cn=admin,dc=test,dc=ubuntu,dc=com"
    bind_password = juju_secret.external_ldap_password.secret_uri
    starttls = false
    urls = "ldap://10.214.237.229"
  }

  channel = "latest/edge"
}
Next, grant the ldap-integrator application access to the external_ldap_password secret:
resource "juju_access_secret" "grant_external_ldap_password_secret" {
  applications = [module.ldap_integrator.app_name]
  model_uuid = juju_model.identity.uuid
  secret_id = juju_secret.external_ldap_password.secret_id
}
You can expand the dropdown below to see the full ldap-integrator/main.tf
Terraform configuration file. Now use the terraform command to apply
your configuration:
terraform -chdir=ldap-integrator init
terraform -chdir=ldap-integrator apply -auto-approve
Full ldap-integrator/main.tf Terraform configuration file
terraform {
  required_providers {
    juju = {
      source = "juju/juju"
      version = "~> 1.0"
    }
  }
}

resource "juju_model" "identity" {
  name = "identity"

  cloud {
    name = "charmed-hpc"
  }
}

resource "juju_secret" "external_ldap_password" {
  model_uuid = juju_model.identity.uuid
  name = "external_ldap_password"

  value = {
    password = "test"
  }
}

module "ldap_integrator" {
  source = "git::https://github.com/canonical/ldap-integrator//terraform"
  model_uuid = juju_model.identity.uuid

  config = {
    base_dn = "cn=testing,cn=ubuntu,cn=com"
    bind_dn = "cn=admin,dc=test,dc=ubuntu,dc=com"
    bind_password = juju_secret.external_ldap_password.secret_uri
    starttls = false
    urls = "ldap://10.214.237.229"
  }

  channel = "latest/edge"
}

resource "juju_access_secret" "grant_external_ldap_password_secret" {
  applications = [module.ldap_integrator.app_name]
  model_uuid = juju_model.identity.uuid
  secret_id = juju_secret.external_ldap_password.secret_id
}
Your ldap-integrator application will become active within a few minutes. The output
of juju status will be similar to the following:
user@host:~$ juju status
Model Controller Cloud/Region Version SLA Timestamp
identity charmed-hpc-controller charmed-hpc/default 3.6.12 unsupported 17:02:01-05:00
App Version Status Scale Charm Channel Rev Exposed Message
ldap-integrator active 1 ldap-integrator latest/edge 35 no
Unit Workload Agent Machine Public address Ports Message
ldap-integrator/0* active idle 0 10.214.237.205
Machine State Address Inst id Base AZ Message
0 started 10.214.237.205 juju-dade42-0 ubuntu@22.04 Running
You now need to deploy SSSD in your slurm model to enroll your cluster’s
machines with the external LDAP server.
Deploy SSSD¶
First, use juju deploy to deploy SSSD in your slurm model:
juju deploy sssd --base "ubuntu@24.04" --channel "edge"
Now use juju integrate to integrate SSSD with the Slurm services
sackd and slurmd:
juju integrate sssd sackd
juju integrate sssd slurmd
First, create the Terraform configuration file sssd/main.tf
using mkdir and touch:
mkdir sssd
touch sssd/main.tf
Now open sssd/main.tf in a text editor and add the Juju Terraform provider to your configuration:
terraform {
  required_providers {
    juju = {
      source = "juju/juju"
      version = "~> 1.0"
    }
  }
}
Now declare data sources for the slurm model, and your sackd and slurmd applications:
data "juju_model" "slurm" {
  name = "slurm"
  owner = "admin"
}

data "juju_application" "sackd" {
  model_uuid = data.juju_model.slurm.uuid
  name = "sackd"
}

data "juju_application" "slurmd" {
  model_uuid = data.juju_model.slurm.uuid
  name = "slurmd"
}
Now deploy SSSD:
module "sssd" {
  source = "git::https://github.com/canonical/sssd-operator//terraform"
  model_uuid = data.juju_model.slurm.uuid
}
Now connect SSSD to the sackd and slurmd applications in your slurm model:
resource "juju_integration" "sssd-to-sackd" {
  model_uuid = data.juju_model.slurm.uuid

  application {
    name = module.sssd.app_name
  }

  application {
    name = data.juju_application.sackd.name
  }
}

resource "juju_integration" "sssd-to-slurmd" {
  model_uuid = data.juju_model.slurm.uuid

  application {
    name = module.sssd.app_name
  }

  application {
    name = data.juju_application.slurmd.name
  }
}
You can expand the dropdown below to see the full sssd/main.tf Terraform configuration file. Now use the terraform command to apply your configuration.
terraform -chdir=sssd init
terraform -chdir=sssd apply -auto-approve
Full sssd/main.tf Terraform configuration file
terraform {
  required_providers {
    juju = {
      source = "juju/juju"
      version = "~> 1.0"
    }
  }
}

data "juju_model" "slurm" {
  name = "slurm"
  owner = "admin"
}

data "juju_application" "sackd" {
  model_uuid = data.juju_model.slurm.uuid
  name = "sackd"
}

data "juju_application" "slurmd" {
  model_uuid = data.juju_model.slurm.uuid
  name = "slurmd"
}

module "sssd" {
  source = "git::https://github.com/canonical/sssd-operator//terraform"
  model_uuid = data.juju_model.slurm.uuid
}

resource "juju_integration" "sssd-to-sackd" {
  model_uuid = data.juju_model.slurm.uuid

  application {
    name = module.sssd.app_name
  }

  application {
    name = data.juju_application.sackd.name
  }
}

resource "juju_integration" "sssd-to-slurmd" {
  model_uuid = data.juju_model.slurm.uuid

  application {
    name = module.sssd.app_name
  }

  application {
    name = data.juju_application.slurmd.name
  }
}
Your SSSD application will reach waiting status within a few minutes.
The output of juju status will be similar to the following:
user@host:~$ juju status
Model Controller Cloud/Region Version SLA Timestamp
slurm charmed-hpc-controller charmed-hpc/default 3.6.12 unsupported 16:17:13-04:00
App Version Status Scale Charm Channel Rev Exposed Message
mysql 8.0.39-0ubun... active 1 mysql 8.0/stable 313 no
sackd 23.11.4-1.2u... active 1 sackd latest/edge 13 no
slurmctld 23.11.4-1.2u... active 1 slurmctld latest/edge 95 no
slurmd 23.11.4-1.2u... active 1 slurmd latest/edge 116 no
slurmdbd 23.11.4-1.2u... active 1 slurmdbd latest/edge 87 no
slurmrestd 23.11.4-1.2u... active 1 slurmrestd latest/edge 89 no
sssd 2.9.4-1.1ubu... waiting 2 sssd latest/edge 6 no Waiting for integrations: [`ldap`]
Unit Workload Agent Machine Public address Ports Message
mysql/0* active idle 3 10.175.90.111 3306,33060/tcp Primary
sackd/0* active idle 0 10.175.90.64
sssd/1 waiting idle 10.175.90.64 Waiting for integrations: [`ldap`]
slurmctld/0* active idle 4 10.175.90.100
slurmd/0* active idle 5 10.175.90.107
sssd/0* waiting idle 10.175.90.107 Waiting for integrations: [`ldap`]
slurmdbd/0* active idle 2 10.175.90.105
slurmrestd/0* active idle 1 10.175.90.215
Machine State Address Inst id Base AZ Message
0 started 10.175.90.64 juju-0f356d-0 ubuntu@24.04 Running
1 started 10.175.90.215 juju-0f356d-1 ubuntu@24.04 Running
2 started 10.175.90.105 juju-0f356d-2 ubuntu@24.04 Running
3 started 10.175.90.111 juju-0f356d-3 ubuntu@22.04 Running
4 started 10.175.90.100 juju-0f356d-4 ubuntu@24.04 Running
5 started 10.175.90.107 juju-0f356d-5 ubuntu@24.04 Running
You now need to integrate SSSD with the ldap-integrator application in your identity model so that
the SSSD application can activate and enroll your machines with the external LDAP server.
Integrate SSSD with ldap-integrator¶
First, create an offer from the ldap-integrator application in your identity model
with juju offer:
juju offer identity.ldap-integrator:ldap ldap
Next, use juju consume to consume the offer from your ldap-integrator
application in your slurm model:
juju consume identity.ldap
After that, use juju integrate to integrate SSSD with ldap-integrator:
juju integrate ldap sssd
First, create the Terraform configuration file integrate-sssd-with-ldap-integrator/main.tf
using mkdir and touch:
mkdir integrate-sssd-with-ldap-integrator
touch integrate-sssd-with-ldap-integrator/main.tf
Now open integrate-sssd-with-ldap-integrator/main.tf in a text editor and add the Juju Terraform provider to your configuration:
terraform {
  required_providers {
    juju = {
      source = "juju/juju"
      version = "~> 1.0"
    }
  }
}
After that, declare data sources for the identity and slurm models,
and the ldap-integrator and SSSD applications:
data "juju_model" "identity" {
  name = "identity"
  owner = "admin"
}

data "juju_model" "slurm" {
  name = "slurm"
  owner = "admin"
}

data "juju_application" "ldap_integrator" {
  model_uuid = data.juju_model.identity.uuid
  name = "ldap-integrator"
}

data "juju_application" "sssd" {
  model_uuid = data.juju_model.slurm.uuid
  name = "sssd"
}
Now create an offer from the ldap-integrator application in your identity model:
resource "juju_offer" "ldap" {
  model_uuid = data.juju_model.identity.uuid
  application_name = data.juju_application.ldap_integrator.name
  endpoints = ["ldap"]
  name = "ldap"
}
Next, integrate SSSD with ldap-integrator:
resource "juju_integration" "sssd_to_ldap" {
  model_uuid = data.juju_model.slurm.uuid

  application {
    name = data.juju_application.sssd.name
  }

  application {
    offer_url = juju_offer.ldap.url
  }
}
You can expand the dropdown below to see the full integrate-sssd-with-ldap-integrator/main.tf
Terraform configuration file. Now use the terraform command to apply your configuration.
terraform -chdir=integrate-sssd-with-ldap-integrator init
terraform -chdir=integrate-sssd-with-ldap-integrator apply -auto-approve
Full integrate-sssd-with-ldap-integrator/main.tf Terraform configuration file
terraform {
  required_providers {
    juju = {
      source = "juju/juju"
      version = "~> 1.0"
    }
  }
}

data "juju_model" "identity" {
  name = "identity"
  owner = "admin"
}

data "juju_model" "slurm" {
  name = "slurm"
  owner = "admin"
}

data "juju_application" "ldap_integrator" {
  model_uuid = data.juju_model.identity.uuid
  name = "ldap-integrator"
}

data "juju_application" "sssd" {
  model_uuid = data.juju_model.slurm.uuid
  name = "sssd"
}

resource "juju_offer" "ldap" {
  model_uuid = data.juju_model.identity.uuid
  application_name = data.juju_application.ldap_integrator.name
  endpoints = ["ldap"]
  name = "ldap"
}

resource "juju_integration" "sssd_to_ldap" {
  model_uuid = data.juju_model.slurm.uuid

  application {
    name = data.juju_application.sssd.name
  }

  application {
    offer_url = juju_offer.ldap.url
  }
}
The SSSD application will become active within a few minutes. The output of juju status
will be similar to the following:
user@host:~$ juju status
Model Controller Cloud/Region Version SLA Timestamp
slurm charmed-hpc-controller charmed-hpc/default 3.6.12 unsupported 16:17:13-04:00
SAAS Status Store URL
ldap active local admin/identity.ldap
App Version Status Scale Charm Channel Rev Exposed Message
mysql 8.0.39-0ubun... active 1 mysql 8.0/stable 313 no
sackd 23.11.4-1.2u... active 1 sackd latest/edge 13 no
slurmctld 23.11.4-1.2u... active 1 slurmctld latest/edge 95 no
slurmd 23.11.4-1.2u... active 1 slurmd latest/edge 116 no
slurmdbd 23.11.4-1.2u... active 1 slurmdbd latest/edge 87 no
slurmrestd 23.11.4-1.2u... active 1 slurmrestd latest/edge 89 no
sssd 2.9.4-1.1ubu... active 2 sssd latest/edge 6 no
Unit Workload Agent Machine Public address Ports Message
mysql/0* active idle 3 10.175.90.111 3306,33060/tcp Primary
sackd/0* active idle 0 10.175.90.64
sssd/1 active idle 10.175.90.64
slurmctld/0* active idle 4 10.175.90.100
slurmd/0* active idle 5 10.175.90.107
sssd/0* active idle 10.175.90.107
slurmdbd/0* active idle 2 10.175.90.105
slurmrestd/0* active idle 1 10.175.90.215
Machine State Address Inst id Base AZ Message
0 started 10.175.90.64 juju-0f356d-0 ubuntu@24.04 Running
1 started 10.175.90.215 juju-0f356d-1 ubuntu@24.04 Running
2 started 10.175.90.105 juju-0f356d-2 ubuntu@24.04 Running
3 started 10.175.90.111 juju-0f356d-3 ubuntu@22.04 Running
4 started 10.175.90.100 juju-0f356d-4 ubuntu@24.04 Running
5 started 10.175.90.107 juju-0f356d-5 ubuntu@24.04 Running
Optional: Enable TLS encryption between SSSD and the external LDAP server¶
The manual-tls-certificates charm can provide your SSSD application with your external LDAP server’s TLS certificate.
Before you begin
The instructions in this section assume that your external LDAP server supports TLS and that you have access to your LDAP server’s TLS certificate.
First, use juju deploy with the --config flag to deploy
manual-tls-certificates with your external LDAP server’s TLS certificate. In this
example, the LDAP server’s TLS certificate is stored in the file bundle.pem:
juju deploy manual-tls-certificates \
  --model identity \
  --config trusted-certificate-bundle="$(cat bundle.pem)"
Tip: How to create a bundle.pem file
You can create your own bundle.pem file with cat. For example,
to create a bundle.pem file that contains both the external LDAP server’s TLS
certificate and the CA certificate that was used to sign the LDAP server’s certificate:
cat ldap-server.crt ca.crt > bundle.pem
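To sanity-check the resulting bundle, you can count how many certificates it contains. The sketch below is meant for an empty scratch directory and uses placeholder PEM bodies in place of your real ldap-server.crt and ca.crt:

```shell
# Placeholder PEM bodies for illustration only; your real files come
# from the LDAP server and its signing CA.
printf -- '-----BEGIN CERTIFICATE-----\nplaceholder\n-----END CERTIFICATE-----\n' > ldap-server.crt
printf -- '-----BEGIN CERTIFICATE-----\nplaceholder\n-----END CERTIFICATE-----\n' > ca.crt
cat ldap-server.crt ca.crt > bundle.pem

# Each certificate contributes exactly one BEGIN marker, so this
# prints the number of certificates in the bundle (2 in this sketch).
grep -c "BEGIN CERTIFICATE" bundle.pem
```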
Next, create an offer from the manual-tls-certificates application in your identity
model with juju offer:
juju offer identity.manual-tls-certificates:send-ca-certs send-ldap-certs
Now use juju consume to consume the offer from your manual-tls-certificates
application in your slurm model:
juju consume identity.send-ldap-certs
After that, use juju integrate to integrate SSSD with manual-tls-certificates:
juju integrate sssd send-ldap-certs
Now use juju config to update the ldap-integrator application’s configuration
to indicate that the external LDAP server supports TLS:
juju config ldap-integrator starttls=true
First, update the configuration of the ldap-integrator application in the ldap-integrator/main.tf Terraform configuration file to indicate that the external LDAP server supports TLS:
module "ldap_integrator" {
  source = "git::https://github.com/canonical/ldap-integrator//terraform"
  model_uuid = juju_model.identity.uuid

  config = {
    base_dn = "cn=testing,cn=ubuntu,cn=com"
    bind_dn = "cn=admin,dc=test,dc=ubuntu,dc=com"
    bind_password = juju_secret.external_ldap_password.secret_uri
    starttls = true
    urls = "ldap://10.214.237.229"
  }

  channel = "latest/edge"
}
Now create the Terraform configuration file manual-tls-certificates/main.tf using
mkdir and touch:
mkdir manual-tls-certificates
touch manual-tls-certificates/main.tf
Now open manual-tls-certificates/main.tf in a text editor and add the Juju Terraform provider to your configuration:
terraform {
  required_providers {
    juju = {
      source = "juju/juju"
      version = "~> 1.0"
    }
  }
}
Next, declare data sources for the identity and slurm models, and the SSSD
application:
data "juju_model" "identity" {
  name = "identity"
  owner = "admin"
}

data "juju_model" "slurm" {
  name = "slurm"
  owner = "admin"
}

data "juju_application" "sssd" {
  model_uuid = data.juju_model.slurm.uuid
  name = "sssd"
}
Now deploy manual-tls-certificates in the identity model. In this
example, the LDAP server’s TLS certificate is stored in the file bundle.pem:
module "manual_tls_certificates" {
  source = "git::https://github.com/canonical/manual-tls-certificates-operator//terraform"
  model_uuid = data.juju_model.identity.uuid

  config = {
    trusted-certificate-bundle = file("bundle.pem")
  }
}
Tip: How to create a bundle.pem file
You can create your own bundle.pem file with cat. For example,
to create a bundle.pem file that contains both the external LDAP server’s TLS
certificate and the CA certificate that was used to sign the LDAP server’s certificate:
cat ldap-server.crt ca.crt > bundle.pem
Now create an offer from the manual-tls-certificates application in your identity model:
resource "juju_offer" "send_ldap_certs" {
  model_uuid = data.juju_model.identity.uuid
  application_name = module.manual_tls_certificates.app_name
  endpoints = ["send-ca-certs"]
  name = "send-ldap-certs"
}
After that, integrate SSSD with manual-tls-certificates:
resource "juju_integration" "sssd_to_send_ldap_certs" {
  model_uuid = data.juju_model.slurm.uuid

  application {
    name = data.juju_application.sssd.name
  }

  application {
    offer_url = juju_offer.send_ldap_certs.url
  }
}
Now use the terraform command to update the configuration of your
ldap-integrator application:
terraform -chdir=ldap-integrator init
terraform -chdir=ldap-integrator apply -auto-approve
You can expand the dropdown below to see the full manual-tls-certificates/main.tf
Terraform configuration file before applying it. Now use the terraform command
again to deploy and integrate manual-tls-certificates:
Full manual-tls-certificates/main.tf Terraform configuration file
terraform {
  required_providers {
    juju = {
      source = "juju/juju"
      version = "~> 1.0"
    }
  }
}

data "juju_model" "identity" {
  name = "identity"
  owner = "admin"
}

data "juju_model" "slurm" {
  name = "slurm"
  owner = "admin"
}

data "juju_application" "sssd" {
  model_uuid = data.juju_model.slurm.uuid
  name = "sssd"
}

module "manual_tls_certificates" {
  source = "git::https://github.com/canonical/manual-tls-certificates-operator//terraform"
  model_uuid = data.juju_model.identity.uuid

  config = {
    trusted-certificate-bundle = file("bundle.pem")
  }
}

resource "juju_offer" "send_ldap_certs" {
  model_uuid = data.juju_model.identity.uuid
  application_name = module.manual_tls_certificates.app_name
  endpoints = ["send-ca-certs"]
  name = "send-ldap-certs"
}

resource "juju_integration" "sssd_to_send_ldap_certs" {
  model_uuid = data.juju_model.slurm.uuid

  application {
    name = data.juju_application.sssd.name
  }

  application {
    offer_url = juju_offer.send_ldap_certs.url
  }
}
terraform -chdir=manual-tls-certificates init
terraform -chdir=manual-tls-certificates apply -auto-approve
Your SSSD application will become active again within a few minutes. You will see that the offer
send-ldap-certs is now active in the output of juju status:
user@host:~$ juju status
Model Controller Cloud/Region Version SLA Timestamp
slurm charmed-hpc-controller charmed-hpc/default 3.6.12 unsupported 16:17:13-04:00
SAAS Status Store URL
ldap active local admin/identity.ldap
send-ldap-certs active local admin/identity.send-ldap-certs
App Version Status Scale Charm Channel Rev Exposed Message
mysql 8.0.39-0ubun... active 1 mysql 8.0/stable 313 no
sackd 23.11.4-1.2u... active 1 sackd latest/edge 13 no
slurmctld 23.11.4-1.2u... active 1 slurmctld latest/edge 95 no
slurmd 23.11.4-1.2u... active 1 slurmd latest/edge 116 no
slurmdbd 23.11.4-1.2u... active 1 slurmdbd latest/edge 87 no
slurmrestd 23.11.4-1.2u... active 1 slurmrestd latest/edge 89 no
sssd 2.9.4-1.1ubu... active 2 sssd latest/edge 6 no
Unit Workload Agent Machine Public address Ports Message
mysql/0* active idle 3 10.175.90.111 3306,33060/tcp Primary
sackd/0* active idle 0 10.175.90.64
sssd/1 active idle 10.175.90.64
slurmctld/0* active idle 4 10.175.90.100
slurmd/0* active idle 5 10.175.90.107
sssd/0* active idle 10.175.90.107
slurmdbd/0* active idle 2 10.175.90.105
slurmrestd/0* active idle 1 10.175.90.215
Machine State Address Inst id Base AZ Message
0 started 10.175.90.64 juju-0f356d-0 ubuntu@24.04 Running
1 started 10.175.90.215 juju-0f356d-1 ubuntu@24.04 Running
2 started 10.175.90.105 juju-0f356d-2 ubuntu@24.04 Running
3 started 10.175.90.111 juju-0f356d-3 ubuntu@22.04 Running
4 started 10.175.90.100 juju-0f356d-4 ubuntu@24.04 Running
5 started 10.175.90.107 juju-0f356d-5 ubuntu@24.04 Running
Next Steps¶
You can now use your external LDAP server as the identity provider for your Charmed HPC cluster.
You can also start exploring the Integrate section if you have completed the How to deploy a shared filesystem how-to.
GLAuth with SSSD¶
This section shows you how to use GLAuth, a lightweight LDAP server, as your Charmed HPC cluster’s identity provider, and SSSD as the client for integrating your cluster’s login and compute nodes to the GLAuth server.
Unfamiliar with GLAuth?
If you’re unfamiliar with operating GLAuth, see the GLAuth quick start guide for a high-level introduction to GLAuth.
Using GLAuth in a production Charmed HPC cluster
GLAuth is a lightweight LDAP server that is intended to be used for development or home use. You should only use GLAuth as the identity provider for your cluster if you are experimenting with Charmed HPC or deploying a small cluster.
You should deploy a dedicated LDAP server and follow the instructions in the External LDAP server with SSSD section instead if you are looking to deploy a production-grade Charmed HPC cluster.
Prerequisites¶
An active Slurm deployment in your charmed-hpc machine cloud.
An initialized charmed-hpc-k8s Kubernetes cloud.
The Juju CLI client installed on your machine.
Deploy GLAuth and SSSD¶
You have two options for deploying GLAuth and SSSD:
Using the Juju CLI client.
Using the Juju Terraform client.
If you want to use Terraform to deploy GLAuth and SSSD, see the
Manage terraform-provider-juju how-to guide for additional
requirements.
Deploy GLAuth¶
First, use juju add-model to create the identity model in your
charmed-hpc-k8s Kubernetes cloud:
juju add-model identity charmed-hpc-k8s
Now use juju deploy to deploy GLAuth with:
Postgres as GLAuth’s back-end database.
Traefik as GLAuth’s ingress provider.
self-signed-certificates as GLAuth’s X.509 certificates provider.
juju deploy glauth-k8s --channel "edge" \
  --config anonymousdse_enabled=true \
  --trust
juju deploy postgresql-k8s --channel "14/stable" --trust
juju deploy self-signed-certificates
juju deploy traefik-k8s --trust
GLAuth configuration requirement
GLAuth must have the anonymousdse_enabled configuration option set to
true so that SSSD can anonymously inspect the GLAuth server’s root directory
server agent service entry (RootDSE) before binding to the GLAuth server.
If anonymousdse_enabled is not set to true, SSSD will fail to bind to
the GLAuth server as GLAuth will disallow unauthenticated clients from inspecting
its RootDSE.
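Once the GLAuth server is reachable, you can confirm that anonymous RootDSE reads are allowed with an ldapsearch query. The address below is the Traefik endpoint from this guide's example deployment; substitute your own, and note that GLAuth listens on port 3893 by default:

```
# Anonymous simple bind (-x with no bind DN), base-scope search of the RootDSE
ldapsearch -x -H ldap://10.175.90.230:3893 -s base -b "" "(objectclass=*)"
```

If the query is refused, check that anonymousdse_enabled is set to true.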
Next, use juju integrate to integrate GLAuth with Postgres, Traefik, and
self-signed-certificates:
juju integrate glauth-k8s postgresql-k8s
juju integrate glauth-k8s self-signed-certificates
juju integrate glauth-k8s:ingress traefik-k8s
First, create the Terraform configuration file glauth/main.tf
using mkdir and touch:
mkdir glauth
touch glauth/main.tf
Now open glauth/main.tf in a text editor and add the Juju Terraform provider to your configuration:
terraform {
  required_providers {
    juju = {
      source = "juju/juju"
      version = "~> 1.0"
    }
  }
}
Next, create the identity model on your charmed-hpc-k8s Kubernetes cloud:
resource "juju_model" "identity" {
  name = "identity"
  credential = "charmed-hpc-k8s"

  cloud {
    name = "charmed-hpc-k8s"
  }
}
Now deploy GLAuth with:
Postgres as GLAuth’s back-end database.
Traefik as GLAuth’s Kubernetes ingress provider.
self-signed-certificates as GLAuth’s X.509 certificates provider.
module "glauth_k8s" {
  source = "git::https://github.com/canonical/glauth-k8s-operator//terraform"
  model_name = juju_model.identity.name

  config = {
    anonymousdse_enabled = true
  }

  channel = "latest/edge"
}

module "postgresql_k8s" {
  source = "git::https://github.com/canonical/postgresql-k8s-operator//terraform"
  juju_model_name = juju_model.identity.name
}

module "self_signed_certificates" {
  source = "git::https://github.com/canonical/self-signed-certificates-operator//terraform"
  model_uuid = juju_model.identity.uuid
}

module "traefik_k8s" {
  source = "git::https://github.com/canonical/traefik-k8s-operator//terraform"
  model_uuid = juju_model.identity.uuid
  app_name = "traefik-k8s"
  channel = "latest/stable"
}
GLAuth configuration requirement
GLAuth must have the anonymousdse_enabled configuration option set to
true so that SSSD can anonymously inspect the GLAuth server’s root directory
server agent service entry (RootDSE) before binding to the GLAuth server.
If anonymousdse_enabled is not set to true, SSSD will fail to bind to
the GLAuth server as GLAuth will disallow unauthenticated clients from inspecting
its RootDSE.
Next, integrate GLAuth with Postgres, Traefik, and self-signed-certificates:
resource "juju_integration" "glauth_k8s_to_postgresql_k8s" {
  model_uuid = juju_model.identity.uuid

  application {
    name = module.glauth_k8s.app_name
  }

  application {
    name = module.postgresql_k8s.application_name
  }
}

resource "juju_integration" "glauth_k8s_to_self_signed_certificates" {
  model_uuid = juju_model.identity.uuid

  application {
    name = module.glauth_k8s.app_name
  }

  application {
    name = module.self_signed_certificates.app_name
  }
}

resource "juju_integration" "glauth_k8s_to_traefik_k8s" {
  model_uuid = juju_model.identity.uuid

  application {
    name = module.glauth_k8s.app_name
    endpoint = module.glauth_k8s.requires.ingress
  }

  application {
    name = module.traefik_k8s.app_name
    endpoint = module.traefik_k8s.endpoints.ingress_per_unit
  }
}
You can expand the dropdown below to see the full glauth/main.tf
Terraform configuration file before applying it. Now use the terraform
command to apply your configuration:
terraform -chdir=glauth init
terraform -chdir=glauth apply -auto-approve
Full glauth/main.tf Terraform configuration file
terraform {
  required_providers {
    juju = {
      source = "juju/juju"
      version = "~> 1.0"
    }
  }
}

resource "juju_model" "identity" {
  name = "identity"
  credential = "charmed-hpc-k8s"

  cloud {
    name = "charmed-hpc-k8s"
  }
}

module "glauth_k8s" {
  source = "git::https://github.com/canonical/glauth-k8s-operator//terraform"
  model_name = juju_model.identity.name

  config = {
    anonymousdse_enabled = true
  }

  channel = "latest/edge"
}

module "postgresql_k8s" {
  source = "git::https://github.com/canonical/postgresql-k8s-operator//terraform"
  juju_model_name = juju_model.identity.name
}

module "self_signed_certificates" {
  source = "git::https://github.com/canonical/self-signed-certificates-operator//terraform"
  model_uuid = juju_model.identity.uuid
}

module "traefik_k8s" {
  source = "git::https://github.com/canonical/traefik-k8s-operator//terraform"
  model_uuid = juju_model.identity.uuid
  app_name = "traefik-k8s"
  channel = "latest/stable"
}

resource "juju_integration" "glauth_k8s_to_postgresql_k8s" {
  model_uuid = juju_model.identity.uuid

  application {
    name = module.glauth_k8s.app_name
  }

  application {
    name = module.postgresql_k8s.application_name
  }
}

resource "juju_integration" "glauth_k8s_to_self_signed_certificates" {
  model_uuid = juju_model.identity.uuid

  application {
    name = module.glauth_k8s.app_name
  }

  application {
    name = module.self_signed_certificates.app_name
  }
}

resource "juju_integration" "glauth_k8s_to_traefik_k8s" {
  model_uuid = juju_model.identity.uuid

  application {
    name = module.glauth_k8s.app_name
    endpoint = module.glauth_k8s.requires.ingress
  }

  application {
    name = module.traefik_k8s.app_name
    endpoint = module.traefik_k8s.endpoints.ingress_per_unit
  }
}
Your GLAuth deployment will become active within a few minutes. The output
of juju status will be similar to the following:
user@host:~$ juju status
Model Controller Cloud/Region Version SLA Timestamp
identity charmed-hpc-controller charmed-hpc-k8s/default 3.6.4 unsupported 14:24:50-04:00
App Version Status Scale Charm Channel Rev Address Exposed Message
glauth-k8s active 1 glauth-k8s latest/edge 52 10.152.183.159 no
postgresql-k8s 14.15 active 1 postgresql-k8s 14/stable 495 10.152.183.236 no
self-signed-certificates active 1 self-signed-certificates latest/stable 264 10.152.183.57 no
traefik-k8s 2.11.0 active 1 traefik-k8s latest/stable 232 10.152.183.122 no Serving at 10.175.90.230
Unit Workload Agent Address Ports Message
glauth-k8s/0* active idle 10.1.0.165
postgresql-k8s/0* active idle 10.1.0.45 Primary
self-signed-certificates/0* active idle 10.1.0.128
traefik-k8s/0* active idle 10.1.0.73 Serving at 10.175.90.230
You now need to deploy SSSD in your slurm model to enroll your cluster’s machines with the GLAuth server.
Deploy SSSD¶
First, use juju deploy to deploy SSSD in your slurm model:
juju deploy sssd --base "ubuntu@24.04" --channel "edge"
Now use juju integrate to integrate SSSD with the Slurm services
sackd and slurmd:
juju integrate sssd sackd
juju integrate sssd slurmd
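If you want to follow the deployment while it settles, you can poll the application's status from the CLI. At this stage the sssd units report waiting, since the ldap integration is added later in this guide:

```shell
# Poll the sssd application's status every five seconds. Press Ctrl+C to stop.
juju status sssd --watch 5s
```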
First, create the Terraform configuration file sssd/main.tf
using mkdir and touch:
mkdir sssd
touch sssd/main.tf
Now open sssd/main.tf in a text editor and add the Juju Terraform provider to your configuration:
terraform {
  required_providers {
    juju = {
      source = "juju/juju"
      version = "~> 1.0"
    }
  }
}
Now declare data sources for the slurm model, and your sackd and slurmd applications:
data "juju_model" "slurm" {
  name = "slurm"
  owner = "admin"
}

data "juju_application" "sackd" {
  model_uuid = data.juju_model.slurm.uuid
  name = "sackd"
}

data "juju_application" "slurmd" {
  model_uuid = data.juju_model.slurm.uuid
  name = "slurmd"
}
Now deploy SSSD:
module "sssd" {
  source = "git::https://github.com/canonical/sssd-operator//terraform"
  model_uuid = data.juju_model.slurm.uuid
}
Now connect SSSD to the sackd and slurmd applications in your slurm model:
resource "juju_integration" "sssd-to-sackd" {
  model_uuid = data.juju_model.slurm.uuid

  application {
    name = module.sssd.app_name
  }

  application {
    name = data.juju_application.sackd.name
  }
}

resource "juju_integration" "sssd-to-slurmd" {
  model_uuid = data.juju_model.slurm.uuid

  application {
    name = module.sssd.app_name
  }

  application {
    name = data.juju_application.slurmd.name
  }
}
You can expand the dropdown below to see the full sssd/main.tf Terraform configuration file before applying it. Now use the terraform command to apply your configuration:
terraform -chdir=sssd init
terraform -chdir=sssd apply -auto-approve
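If you later change sssd/main.tf, you can optionally review what Terraform will do before re-running apply:

```shell
# Optional: check formatting, validate the configuration, and preview the
# changes Terraform would make, without applying anything.
terraform -chdir=sssd fmt -check
terraform -chdir=sssd validate
terraform -chdir=sssd plan
```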
Full sssd/main.tf Terraform configuration file
terraform {
  required_providers {
    juju = {
      source = "juju/juju"
      version = "~> 1.0"
    }
  }
}

data "juju_model" "slurm" {
  name = "slurm"
  owner = "admin"
}

data "juju_application" "sackd" {
  model_uuid = data.juju_model.slurm.uuid
  name = "sackd"
}

data "juju_application" "slurmd" {
  model_uuid = data.juju_model.slurm.uuid
  name = "slurmd"
}

module "sssd" {
  source = "git::https://github.com/canonical/sssd-operator//terraform"
  model_uuid = data.juju_model.slurm.uuid
}

resource "juju_integration" "sssd-to-sackd" {
  model_uuid = data.juju_model.slurm.uuid

  application {
    name = module.sssd.app_name
  }

  application {
    name = data.juju_application.sackd.name
  }
}

resource "juju_integration" "sssd-to-slurmd" {
  model_uuid = data.juju_model.slurm.uuid

  application {
    name = module.sssd.app_name
  }

  application {
    name = data.juju_application.slurmd.name
  }
}
Your SSSD application will reach waiting status within a few minutes.
The output of juju status will be similar to the following:
user@host:~$ juju status
Model Controller Cloud/Region Version SLA Timestamp
slurm charmed-hpc-controller charmed-hpc/default 3.6.12 unsupported 16:17:13-04:00
App Version Status Scale Charm Channel Rev Exposed Message
mysql 8.0.39-0ubun... active 1 mysql 8.0/stable 313 no
sackd 23.11.4-1.2u... active 1 sackd latest/edge 13 no
slurmctld 23.11.4-1.2u... active 1 slurmctld latest/edge 95 no
slurmd 23.11.4-1.2u... active 1 slurmd latest/edge 116 no
slurmdbd 23.11.4-1.2u... active 1 slurmdbd latest/edge 87 no
slurmrestd 23.11.4-1.2u... active 1 slurmrestd latest/edge 89 no
sssd 2.9.4-1.1ubu... waiting 2 sssd latest/edge 6 no Waiting for integrations: [`ldap`]
Unit Workload Agent Machine Public address Ports Message
mysql/0* active idle 3 10.175.90.111 3306,33060/tcp Primary
sackd/0* active idle 0 10.175.90.64
sssd/1 waiting idle 10.175.90.64 Waiting for integrations: [`ldap`]
slurmctld/0* active idle 4 10.175.90.100
slurmd/0* active idle 5 10.175.90.107
sssd/0* waiting idle 10.175.90.107 Waiting for integrations: [`ldap`]
slurmdbd/0* active idle 2 10.175.90.105
slurmrestd/0* active idle 1 10.175.90.215
Machine State Address Inst id Base AZ Message
0 started 10.175.90.64 juju-0f356d-0 ubuntu@24.04 Running
1 started 10.175.90.215 juju-0f356d-1 ubuntu@24.04 Running
2 started 10.175.90.105 juju-0f356d-2 ubuntu@24.04 Running
3 started 10.175.90.111 juju-0f356d-3 ubuntu@22.04 Running
4 started 10.175.90.100 juju-0f356d-4 ubuntu@24.04 Running
5 started 10.175.90.107 juju-0f356d-5 ubuntu@24.04 Running
You now need to integrate SSSD with the GLAuth application in your identity model so that
the SSSD application can activate and enroll your machines with the GLAuth server.
Integrate SSSD with GLAuth¶
First, create offers for GLAuth in your identity model using juju offer:
juju offer identity.glauth-k8s:ldap ldap
juju offer identity.glauth-k8s:send-ca-cert send-ldap-certs
Next, use juju consume to consume offers from your GLAuth application
in your slurm model:
juju consume identity.ldap
juju consume identity.send-ldap-certs
After that, use juju integrate to integrate SSSD with GLAuth:
juju integrate sssd ldap
juju integrate sssd send-ldap-certs
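You can confirm that both cross-model integrations were created by including relation details in the status output:

```shell
# Show the slurm model's status with its relations. The ldap and
# send-ldap-certs SAAS entries should each show a relation to sssd.
juju status --relations
```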
The SSSD application will become active within a few minutes. The output of juju status
will be similar to the following:
user@host:~$ juju status
Model Controller Cloud/Region Version SLA Timestamp
slurm charmed-hpc-controller charmed-hpc/default 3.6.12 unsupported 16:17:13-04:00
SAAS Status Store URL
ldap active local admin/identity.ldap
send-ldap-certs active local admin/identity.send-ldap-certs
App Version Status Scale Charm Channel Rev Exposed Message
mysql 8.0.39-0ubun... active 1 mysql 8.0/stable 313 no
sackd 23.11.4-1.2u... active 1 sackd latest/edge 13 no
slurmctld 23.11.4-1.2u... active 1 slurmctld latest/edge 95 no
slurmd 23.11.4-1.2u... active 1 slurmd latest/edge 116 no
slurmdbd 23.11.4-1.2u... active 1 slurmdbd latest/edge 87 no
slurmrestd 23.11.4-1.2u... active 1 slurmrestd latest/edge 89 no
sssd 2.9.4-1.1ubu... active 2 sssd latest/edge 6 no
Unit Workload Agent Machine Public address Ports Message
mysql/0* active idle 3 10.175.90.111 3306,33060/tcp Primary
sackd/0* active idle 0 10.175.90.64
sssd/1 active idle 10.175.90.64
slurmctld/0* active idle 4 10.175.90.100
slurmd/0* active idle 5 10.175.90.107
sssd/0* active idle 10.175.90.107
slurmdbd/0* active idle 2 10.175.90.105
slurmrestd/0* active idle 1 10.175.90.215
Machine State Address Inst id Base AZ Message
0 started 10.175.90.64 juju-0f356d-0 ubuntu@24.04 Running
1 started 10.175.90.215 juju-0f356d-1 ubuntu@24.04 Running
2 started 10.175.90.105 juju-0f356d-2 ubuntu@24.04 Running
3 started 10.175.90.111 juju-0f356d-3 ubuntu@22.04 Running
4 started 10.175.90.100 juju-0f356d-4 ubuntu@24.04 Running
5 started 10.175.90.107 juju-0f356d-5 ubuntu@24.04 Running
First, create the Terraform configuration file integrate-sssd-with-glauth/main.tf
using mkdir and touch:
mkdir integrate-sssd-with-glauth
touch integrate-sssd-with-glauth/main.tf
Now open integrate-sssd-with-glauth/main.tf in a text editor and add the Juju Terraform provider to your configuration:
terraform {
  required_providers {
    juju = {
      source = "juju/juju"
      version = "~> 1.0"
    }
  }
}
Next, declare data sources for the identity and slurm models, and the GLAuth and SSSD
applications:
data "juju_model" "identity" {
  name = "identity"
  owner = "admin"
}

data "juju_model" "slurm" {
  name = "slurm"
  owner = "admin"
}

data "juju_application" "glauth_k8s" {
  model_uuid = data.juju_model.identity.uuid
  name = "glauth-k8s"
}

data "juju_application" "sssd" {
  model_uuid = data.juju_model.slurm.uuid
  name = "sssd"
}
Now create offers from the GLAuth application in your identity model:
resource "juju_offer" "ldap" {
  model_uuid = data.juju_model.identity.uuid
  application_name = data.juju_application.glauth_k8s.name
  endpoints = ["ldap"]
  name = "ldap"
}

resource "juju_offer" "send_ldap_certs" {
  model_uuid = data.juju_model.identity.uuid
  application_name = data.juju_application.glauth_k8s.name
  endpoints = ["send-ca-certs"]
  name = "send-ldap-certs"
}
After that, integrate SSSD with GLAuth:
resource "juju_integration" "sssd_to_ldap" {
  model_uuid = data.juju_model.slurm.uuid

  application {
    name = data.juju_application.sssd.name
  }

  application {
    offer_url = juju_offer.ldap.url
  }
}

resource "juju_integration" "sssd_to_send_ldap_certs" {
  model_uuid = data.juju_model.slurm.uuid

  application {
    name = data.juju_application.sssd.name
  }

  application {
    offer_url = juju_offer.send_ldap_certs.url
  }
}
You can expand the dropdown below to see the full integrate-sssd-with-glauth/main.tf
Terraform configuration file before applying it. Now use the terraform command to
apply your configuration:
terraform -chdir=integrate-sssd-with-glauth init
terraform -chdir=integrate-sssd-with-glauth apply -auto-approve
Full integrate-sssd-with-glauth/main.tf Terraform configuration file
terraform {
  required_providers {
    juju = {
      source = "juju/juju"
      version = "~> 1.0"
    }
  }
}

data "juju_model" "identity" {
  name = "identity"
  owner = "admin"
}

data "juju_model" "slurm" {
  name = "slurm"
  owner = "admin"
}

data "juju_application" "glauth_k8s" {
  model_uuid = data.juju_model.identity.uuid
  name = "glauth-k8s"
}

data "juju_application" "sssd" {
  model_uuid = data.juju_model.slurm.uuid
  name = "sssd"
}

resource "juju_offer" "ldap" {
  model_uuid = data.juju_model.identity.uuid
  application_name = data.juju_application.glauth_k8s.name
  endpoints = ["ldap"]
  name = "ldap"
}

resource "juju_offer" "send_ldap_certs" {
  model_uuid = data.juju_model.identity.uuid
  application_name = data.juju_application.glauth_k8s.name
  endpoints = ["send-ca-certs"]
  name = "send-ldap-certs"
}

resource "juju_integration" "sssd_to_ldap" {
  model_uuid = data.juju_model.slurm.uuid

  application {
    name = data.juju_application.sssd.name
  }

  application {
    offer_url = juju_offer.ldap.url
  }
}

resource "juju_integration" "sssd_to_send_ldap_certs" {
  model_uuid = data.juju_model.slurm.uuid

  application {
    name = data.juju_application.sssd.name
  }

  application {
    offer_url = juju_offer.send_ldap_certs.url
  }
}
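After applying, you can confirm that the two offers exist in the identity model:

```shell
# List the application offers in the identity model. Both the ldap and
# send-ldap-certs offers should appear with sssd as a connected consumer.
juju offers -m identity
```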
The SSSD application will become active within a few minutes. The output of juju status
will be similar to the following:
user@host:~$ juju status
Model Controller Cloud/Region Version SLA Timestamp
slurm charmed-hpc-controller charmed-hpc/default 3.6.12 unsupported 16:17:13-04:00
SAAS Status Store URL
ldap active local admin/identity.ldap
send-ldap-certs active local admin/identity.send-ldap-certs
App Version Status Scale Charm Channel Rev Exposed Message
mysql 8.0.39-0ubun... active 1 mysql 8.0/stable 313 no
sackd 23.11.4-1.2u... active 1 sackd latest/edge 13 no
slurmctld 23.11.4-1.2u... active 1 slurmctld latest/edge 95 no
slurmd 23.11.4-1.2u... active 1 slurmd latest/edge 116 no
slurmdbd 23.11.4-1.2u... active 1 slurmdbd latest/edge 87 no
slurmrestd 23.11.4-1.2u... active 1 slurmrestd latest/edge 89 no
sssd 2.9.4-1.1ubu... active 2 sssd latest/edge 6 no
Unit Workload Agent Machine Public address Ports Message
mysql/0* active idle 3 10.175.90.111 3306,33060/tcp Primary
sackd/0* active idle 0 10.175.90.64
sssd/1 active idle 10.175.90.64
slurmctld/0* active idle 4 10.175.90.100
slurmd/0* active idle 5 10.175.90.107
sssd/0* active idle 10.175.90.107
slurmdbd/0* active idle 2 10.175.90.105
slurmrestd/0* active idle 1 10.175.90.215
Machine State Address Inst id Base AZ Message
0 started 10.175.90.64 juju-0f356d-0 ubuntu@24.04 Running
1 started 10.175.90.215 juju-0f356d-1 ubuntu@24.04 Running
2 started 10.175.90.105 juju-0f356d-2 ubuntu@24.04 Running
3 started 10.175.90.111 juju-0f356d-3 ubuntu@22.04 Running
4 started 10.175.90.100 juju-0f356d-4 ubuntu@24.04 Running
5 started 10.175.90.107 juju-0f356d-5 ubuntu@24.04 Running
Next Steps¶
You can now use GLAuth as the identity provider for your Charmed HPC cluster.
Explore GLAuth’s Database documentation for more information on how to use SQL queries to manage your cluster’s users and groups in your Postgres database.
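As a hypothetical sketch of what that management can look like, the example below adds a POSIX group and user through the database with psql. The connection string ($GLAUTH_DB_URI is a placeholder), table names (groups, users), and column names (gidnumber, uidnumber, primarygroup, passsha256) are assumptions based on GLAuth v2's default SQL schema, not verified values; check them against GLAuth's Database documentation before running anything against your deployment.

```shell
# Hypothetical sketch: add a group and user directly in GLAuth's backing
# PostgreSQL database. $GLAUTH_DB_URI is a placeholder connection string, and
# the table and column names are assumptions from GLAuth v2's default schema.
psql "$GLAUTH_DB_URI" <<'SQL'
-- Create a "researchers" group with GID 5501.
INSERT INTO groups (name, gidnumber) VALUES ('researchers', 5501);

-- Create a user "alice" in that group; passsha256 stores the SHA-256
-- hash of the user's password.
INSERT INTO users (name, uidnumber, primarygroup, passsha256)
VALUES ('alice', 5001, 5501, encode(sha256('changeme'::bytea), 'hex'));
SQL
```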
You can also start exploring the Integrate section if you have completed the How to deploy a shared filesystem how-to.