Kafka authentication using SASL/SCRAM

In this blog post I will focus on how to configure Kafka authentication using SASL/SCRAM. The source code can be checked out from this repository.

What is SCRAM?

In cryptography, the Salted Challenge Response Authentication Mechanism (SCRAM) is a family of modern, password-based challenge–response authentication mechanisms providing authentication of a user to a server. As it is specified for the Simple Authentication and Security Layer (SASL), it can be used for password-based logins to services like SMTP and, in our case, Kafka.

The following are characteristics of Kafka SASL/SCRAM:

  • No plaintext passwords are stored in Kafka. All account credentials are stored in Zookeeper, but only after being salted and hashed, so no plaintext passwords are kept.
  • The client (in our case the Kafka service in each broker) does not send the password over the wire to Zookeeper; instead it hashes the password the same way Zookeeper did when the account was first created.
  • Client and Zookeeper can verify each other.
  • SCRAM has to be used with SSL/TLS to prevent interception of the SCRAM exchange.

For more details, please jump to the section How SCRAM Works.

SSL/TLS key and trust stores

First we have to create SSL key stores and trust stores for all Kafka and Zookeeper nodes; for more details, check the SSL section in the Confluent documentation and the shell script that creates those files.

Configure Zookeeper nodes

Zookeeper does not yet support SASL/SCRAM; it supports SASL/DIGEST-MD5 instead, so we need to configure both server-to-server and client-to-server communication. First we need to configure KAFKA_OPTS in the Zookeeper Docker container as follows (for the full configuration, please check the docker compose file for SCRAM).

KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/secrets/zookeeper_server_jaas.conf
-Dquorum.auth.enableSasl=true
-Dquorum.auth.learnerRequireSasl=true
-Dquorum.auth.serverRequireSasl=true
-Dquorum.cnxn.threads.size=20
-DjaasLoginRenew=3600000
-DrequireClientAuthScheme=sasl
-Dquorum.auth.learner.loginContext=QuorumLearner
-Dquorum.auth.server.loginContext=QuorumServer
-Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
-Dzookeeper.authProvider.2=org.apache.zookeeper.server.auth.DigestAuthenticationProvider

A full description of every property here can be found in the server-server Zookeeper authentication documentation.

Server-server mutual authentication

During leader election among the Zookeeper ensemble nodes, the default configuration is not secured (i.e., not authenticated), so any server could be added to the cluster and elected leader at some point. The quorum peer servers need to secure the communication between them; one approach to do that is DIGEST-MD5. The following configurations need to be added.

We need to define the credentials that learners will use when connecting to the leader. These are defined in the file /etc/kafka/secrets/zookeeper_server_jaas.conf:

QuorumServer {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_zookeeper="password";
};
QuorumLearner {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="zookeeper"
password="password";
};

As you can see, the passwords are stored as plaintext, so we have to secure this file and allow read-only access only to the user running the Zookeeper service.

Client-to-Server authentication

In this part we need to configure client-to-server authentication. In our case the client is the Kafka service in each Kafka broker, as well as any user that connects to Zookeeper directly, for example when using the kafka-configs command to create Kafka accounts.

We need to add a Server section in the same Zookeeper JAAS file:

Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_admin="password"
user_anyother_user="password";
};
  • admin: the default user with administrator privileges; this is the user Kafka has to use to connect to Zookeeper.

Now, any client that wants to connect directly to Zookeeper needs to use one of the user credentials defined in the Server section. For example, the following script creates a user account that will later be used to connect to Kafka.

docker run -it --rm -v ${PWD}/zookeeper_client_jaas.conf:/etc/kafka/secrets/zookeeper_client_jaas.conf \
-e KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/secrets/zookeeper_client_jaas.conf" confluentinc/cp-kafka:5.0.1 \
kafka-configs --zookeeper zookeeper-server:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=4096,password=password]' \
--entity-type users --entity-name metricsreporter

The mounted zookeeper_client_jaas.conf contains the admin credentials defined in the Server section:

Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="admin"
password="password";
};

Configure Kafka brokers

First, we need to configure all Kafka brokers to support SASL/SCRAM. Let's start with the Docker configuration changes.

KAFKA_ADVERTISED_LISTENERS: SASL_SSL://kafka-broker-1:19094
KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-256
KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_SSL
KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: SCRAM-SHA-256
CONFLUENT_METRICS_REPORTER_SECURITY_PROTOCOL: SASL_SSL
CONFLUENT_METRICS_REPORTER_SASL_MECHANISM: SCRAM-SHA-256
KAFKA_ZOOKEEPER_SASL_ENABLED: "true"
KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "false"
KAFKA_OPTS: -Dzookeeper.sasl.client=true
-Dzookeeper.sasl.clientconfig=Client
-Djava.security.auth.login.config=/etc/kafka/secrets/kafka_jaas.conf

Zookeeper client

For Kafka brokers to connect to Zookeeper, we need to add a Client context in the kafka_server_jaas.conf file.

Client {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="password";
};

The next two steps use Kafka account credentials, which have to be created in Zookeeper before starting the Kafka brokers, using the following commands:

docker run -it --rm  confluentinc/cp-kafka:5.0.1 kafka-configs --zookeeper zookeeper-1:22181 --alter --add-config \
'SCRAM-SHA-256=[iterations=4096,password=password]' --entity-type users --entity-name metricsreporter

docker run -it --rm confluentinc/cp-kafka:5.0.1 kafka-configs --zookeeper zookeeper-1:22181 --alter --add-config \
'SCRAM-SHA-256=[iterations=4096,password=password]' --entity-type users --entity-name kafkabroker

Kafka inter-broker configurations

We need to define the account credentials used to authenticate communication between Kafka brokers. To do so, we add the following section to the same JAAS file:

KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="kafkabroker"
password="password";
};

Kafka client configurations

For any Kafka clients running inside the brokers, such as the metrics reporter, we need to configure a KafkaClient context in the JAAS file.

KafkaClient {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="metricsreporter"
password="password";
};

As you can see, the login module here is ScramLoginModule, which means this user account needs to be created in Zookeeper via the kafka-configs command.
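As a quick sanity check of how the broker-side settings map to a client, here is a minimal sketch of the equivalent client-side configuration. The parameter names follow the kafka-python library and are an assumption; the host, port, and credentials are the placeholder values used throughout this post.

```python
# Hypothetical client configuration mirroring the broker settings above.
# Keys follow kafka-python's KafkaProducer/KafkaConsumer parameters
# (an assumption -- adjust for your client library of choice).
client_config = {
    "bootstrap_servers": "kafka-broker-1:19094",  # the SASL_SSL listener
    "security_protocol": "SASL_SSL",              # SCRAM must run over TLS
    "sasl_mechanism": "SCRAM-SHA-256",
    "sasl_plain_username": "metricsreporter",     # created via kafka-configs
    "sasl_plain_password": "password",
    "ssl_cafile": "/etc/kafka/secrets/ca-cert",   # CA that signed the broker certs
}
```

With kafka-python installed, this dictionary would be passed as `KafkaProducer(**client_config)`.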

Lessons learned

  • In Kafka brokers, the user used to connect to Zookeeper has to be named admin.
  • The login module used in the Client context of the Kafka JAAS file (for connecting to Zookeeper) cannot be ScramLoginModule; it has to be a plain login module, which means the credentials will be saved as plaintext in Zookeeper.
  • All credentials used in the Kafka JAAS file with ScramLoginModule have to be created in Zookeeper before the Kafka brokers start. Because of that, I added a service to the compose file that creates all required credentials.

How SCRAM works

The next sections explain in depth how SCRAM authentication works; you do not need to know this to configure Kafka with SASL/SCRAM. This section is adapted from MongoDB's SCRAM explained.

Setup: Account Creation

To create an identity on the server, an administrator creates a user account, specifying the plaintext username and password. The server first applies the key derivation function to compute the SaltedPassword.

SaltedPassword = KeyDerive(password, salt, i)
ClientKey = HMAC(SaltedPassword, "Client Key")
StoredKey = H(ClientKey)
ServerKey = HMAC(SaltedPassword, "Server Key")

  • password: the plaintext password for the user.
  • H(str): a cryptographic hash function.
  • HMAC(str, key): a hash-based message authentication code.
  • KeyDerive(str, salt, i): a key derivation function.
  • i: the iteration count; a higher i increases the cost of a brute-force attack, but also increases the time required for a user to authenticate to the server.
  • salt: a per-user randomly generated salt used during key derivation.

The StoredKey is a cryptographic digest of the ClientKey, which is itself a cryptographic digest of the salted password. The key idea of the StoredKey is that it can be used to verify a ClientKey without having to store the ClientKey itself, while the ServerKey is used by the server to prove its identity to the client (check the Kafka source code, ScramFormatter). The server stores the following:

  • An iteration count for key derivation (i).
  • A per-user randomly generated salt to be used during key derivation (salt).
  • The StoredKey, used by the server to verify the client’s identity.
  • The ServerKey, used by the server to prove its identity to the client.
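The account-creation step above can be sketched in Python. This is a minimal illustration, assuming SCRAM-SHA-256 with PBKDF2 as the key derivation function (which matches what Kafka's ScramFormatter does); the salt size and the credential layout are simplifications.

```python
import hashlib
import hmac
import os

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def create_credential(password: str, iterations: int = 4096) -> dict:
    """Compute what the server stores for a new SCRAM-SHA-256 account."""
    salt = os.urandom(16)                         # per-user random salt
    # SaltedPassword = KeyDerive(password, salt, i)
    salted_password = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt, iterations)
    client_key = hmac_sha256(salted_password, b"Client Key")
    stored_key = hashlib.sha256(client_key).digest()  # StoredKey = H(ClientKey)
    server_key = hmac_sha256(salted_password, b"Server Key")
    # The plaintext password and the ClientKey itself are discarded.
    return {"salt": salt, "iterations": iterations,
            "stored_key": stored_key, "server_key": server_key}

cred = create_credential("password")
```

Note that only the salt, iteration count, StoredKey, and ServerKey survive; an attacker who steals this record still cannot log in as the client, because the ClientKey is never stored.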

Authentication

When the client tries to authenticate with Kafka, it follows steps like the ones used in MongoDB:

SCRAM authentication flow
  • The client sends an authentication request to the server containing the username and a random number (called the ClientNonce) used to prevent replay attacks.
  • The server first retrieves the user’s credential, and responds with a message containing the salt, the iteration count (i), and the CombinedNonce, a concatenation of the ClientNonce and an additional ServerNonce generated by the server.
  • The client responds with the ClientProof and the CombinedNonce. The ClientProof allows the client to prove that it possesses the ClientKey without having to send it over the network.
  • To compute the ClientProof, the client first computes the StoredKey and ClientKey in the same manner as the server did when it initially generated the credential during account creation.
  • Because the server can compute the ClientSignature using the information stored in its credential database, the bitwise XOR with the ClientKey, which the server does not store, prevents the server from being able to forge a valid ClientProof.
  • The server verifies the client’s proof and issues a proof of its own: it computes the ClientSignature using the StoredKey from the client’s credential, XORs it with the received ClientProof to recover the ClientKey, and checks that the hash of that key matches the StoredKey.
  • The client verifies the server’s proof by computing the ServerKey and ServerSignature, then comparing its ServerSignature to the one received from the server. If they are the same, the client has proof that the server has access to the ServerKey.
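The proof exchange in the steps above can be sketched end-to-end in one Python snippet. This is a toy, single-process illustration: the AuthMessage is simplified to the two concatenated nonces rather than the full SCRAM message format, and in the real protocol these values travel over the network between client and server.

```python
import hashlib
import hmac
import os

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Setup: what the server stored at account creation (see previous section).
salt, iterations = os.urandom(16), 4096
salted_password = hashlib.pbkdf2_hmac("sha256", b"password", salt, iterations)
client_key = hmac_sha256(salted_password, b"Client Key")
stored_key = hashlib.sha256(client_key).digest()
server_key = hmac_sha256(salted_password, b"Server Key")

# Nonce exchange (simplified): ClientNonce + ServerNonce form the AuthMessage.
auth_message = os.urandom(16) + os.urandom(16)

# Client: prove possession of the ClientKey without sending it over the wire.
client_signature = hmac_sha256(stored_key, auth_message)
client_proof = xor(client_key, client_signature)

# Server: recover the ClientKey from the proof and verify it against StoredKey.
recovered = xor(client_proof, hmac_sha256(stored_key, auth_message))
assert hashlib.sha256(recovered).digest() == stored_key  # client authenticated

# Server: prove its own identity; the client recomputes and compares.
server_signature = hmac_sha256(server_key, auth_message)
assert server_signature == hmac_sha256(
    hmac_sha256(salted_password, b"Server Key"), auth_message)
```

The XOR trick is the heart of the scheme: the server knows the ClientSignature but not the ClientKey, so it can unwrap a genuine proof yet cannot forge one itself.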
