As customers across verticals like finance, healthcare, state and local government, and education adopt Confluent Platform for mission-critical data, security becomes increasingly important. In the latest release of our streaming platform, Confluent Platform 5.0, we introduced several security enhancements to help you solve your most difficult security problems. In this article, I will provide a deeper technical analysis of the most important security enhancements that are part of the Confluent Platform 5.0 release, including those from Apache Kafka® 2.0.
Over the past several quarters, we have made major security enhancements to Confluent Platform, which have helped many of you safeguard your business-critical applications. With the latest release, we have further increased the robustness of our security feature set.
The following new security features are available in both Confluent Platform 5.0 and Apache Kafka 2.0:
- Prefixed wildcard ACLs
- Secret protection for connector configurations in Kafka Connect
The following security features are available only in Confluent Platform 5.0:
- AD/LDAP group-based authorization
- Broker configuration comparison and feature access controls in Confluent Control Center
Let’s walk through each of these enhancements in detail.
The majority of enterprises standardize on Active Directory (AD) for identity-related services, and most people who use directory services would also like to be able to use groups for configuring access controls. In the latest release, we added the capability to use AD and LDAP groups for just this purpose. This simplifies access control management, requiring fewer rules when managing access for groups or entire organizations. The benefits are:
- Group-based ACLs can be managed with the standard tooling, kafka-acls and AdminClient (see the sketch after this list).
- A DENY rule for any of the groups that a principal belongs to will deny access to a resource.
- An ALLOW rule for any of a principal's groups will allow access if no DENY rule matches the resource.
- User and group data is refreshed periodically based on ldap.authorizer.refresh.interval.ms, enabling the user and group information to keep up to date with the directory server.
- Groups can be filtered using the ldap.authorizer.group.search.filter configuration parameter. If you have many users or groups in AD/LDAP, this can be a lifesaver.

You can learn more by checking out the Confluent LDAP Authorizer documentation.
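To make this concrete, here is a minimal sketch of defining group-level rules with the Kafka AdminClient. It assumes the LDAP Authorizer is configured on the brokers and that directory groups surface as ACL principals of type Group; the group names, topic name and broker address below are hypothetical.

```java
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class GroupAclExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        // SASL/SSL settings for the admin connection are omitted for brevity.

        // Allow everyone in the Finance group to read the payments topic...
        AclBinding allowFinance = new AclBinding(
            new ResourcePattern(ResourceType.TOPIC, "payments", PatternType.LITERAL),
            new AccessControlEntry("Group:Finance", "*",
                AclOperation.READ, AclPermissionType.ALLOW));

        // ...but deny the Contractors group; a DENY rule for any of a
        // principal's groups takes precedence over matching ALLOW rules.
        AclBinding denyContractors = new AclBinding(
            new ResourcePattern(ResourceType.TOPIC, "payments", PatternType.LITERAL),
            new AccessControlEntry("Group:Contractors", "*",
                AclOperation.READ, AclPermissionType.DENY));

        try (AdminClient admin = AdminClient.create(props)) {
            admin.createAcls(Arrays.asList(allowFinance, denyContractors)).all().get();
        }
    }
}
```

A user who belongs to both groups would be denied, while a user who is only in Finance would be allowed, which is exactly the precedence described above.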
Confluent Control Center now gives you the ability to compare the configuration of different brokers in a Kafka cluster and identify any differences between them. This is a great way to see if you have misconfigured brokers that could be contributing to security vulnerabilities.
Enterprises need to ensure that end users cannot access sensitive data via Control Center for security and compliance reasons. In order to let administrators control application-wide access to features that reveal topic data, we added the capability to control access to topic inspection, schemas and KSQL queries. When you restrict access to one of these features through the configuration file, Control Center’s user interface will reflect this change upon startup.
ACLs do a great job of controlling access to resources like topics and consumer groups, but they lack the flexibility of more sophisticated pattern matching: a resource name in an ACL definition is either a complete resource name or a wildcard that matches everything, with no middle ground. Oftentimes, you would prefer more flexibility when setting up ACLs so you don't end up with a huge list of them. In Confluent Platform 5.0 and Apache Kafka 2.0, we extended the concept of the wildcard ACL (*) to support prefixed ACLs. For example:
- FinanceUser has access to all topics that start with Finance_
- UserN has access to all consumer groups that start with FraudDetection_
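As a sketch of what the first example looks like with the Kafka 2.0 AdminClient, the binding below grants FinanceUser read access to every topic whose name starts with Finance_; the principal and broker address are placeholders.

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class PrefixedAclExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");

        // PatternType.PREFIXED matches every topic whose name starts with "Finance_".
        AclBinding financeTopics = new AclBinding(
            new ResourcePattern(ResourceType.TOPIC, "Finance_", PatternType.PREFIXED),
            new AccessControlEntry("User:FinanceUser", "*",
                AclOperation.READ, AclPermissionType.ALLOW));

        try (AdminClient admin = AdminClient.create(props)) {
            admin.createAcls(Collections.singletonList(financeTopics)).all().get();
        }
    }
}
```

A single prefixed rule like this replaces what would otherwise be one literal rule per Finance_ topic, and it automatically covers topics created later with the same prefix.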
Today, Kafka Connect keeps configuration information in clear text, including source and sink credentials. This is not ideal from a security standpoint. At the same time, there are third-party secret stores (like HashiCorp Vault) that provide the ability to protect these passwords at rest and in flight, while also maintaining audit logs, versioning and providing high availability.
Apache Kafka 2.0 and Confluent Platform 5.0 include a new framework in Kafka Connect that lets you integrate your preferred secret store and then use placeholders for secrets in the connector configurations. Kafka Connect will keep the placeholders in its stored configurations and in all REST requests and responses. It will automatically use your plugin to resolve the placeholders into secrets immediately before the configuration is passed to the connector upon startup.
Please note that at this point we do not provide built-in secret store plugins—this feature introduces the framework, not the plugins themselves—but you are now able to implement your own.
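As a rough illustration of the plugin surface, here is a minimal sketch of a custom provider built on Kafka's ConfigProvider interface. To stay self-contained it resolves secrets from environment variables rather than a real secret store; the class name is hypothetical, and a production plugin would talk to a store like Vault instead.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import org.apache.kafka.common.config.ConfigData;
import org.apache.kafka.common.config.provider.ConfigProvider;

/**
 * Hypothetical secret store plugin: resolves placeholder references in
 * connector configurations from environment variables.
 */
public class EnvSecretConfigProvider implements ConfigProvider {

    @Override
    public void configure(Map<String, ?> configs) {
        // Receives the provider's parameters from the worker configuration.
        // Nothing to configure for this simple example.
    }

    @Override
    public ConfigData get(String path) {
        // With no specific keys requested, expose all environment variables.
        return new ConfigData(new HashMap<>(System.getenv()));
    }

    @Override
    public ConfigData get(String path, Set<String> keys) {
        // Resolve only the keys referenced by placeholders in the connector config.
        Map<String, String> values = new HashMap<>();
        for (String key : keys) {
            String value = System.getenv(key);
            if (value != null) {
                values.put(key, value);
            }
        }
        return new ConfigData(values);
    }

    @Override
    public void close() {
        // No resources to release in this example.
    }
}
```

Once a provider like this is registered in the worker configuration (via config.providers and the corresponding config.providers.&lt;name&gt;.class setting), connector configurations can reference secrets with placeholders of the form ${&lt;name&gt;:&lt;path&gt;:&lt;key&gt;} instead of literal values.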
All of the enhancements we’ve made across the platform in 5.0 have enabled us to make further security improvements in KSQL and Kafka Streams applications.
The combination of ACL wildcards, support for AD/LDAP groups and the new configuration setup simplifies the security configuration for KSQL and Kafka Streams, which is particularly helpful for teams deploying and operating KSQL and Kafka Streams applications in secured, multi-tenant production environments.
This enables developers to get to production faster by streamlining the process of setting access control on source, internal and sink topics.
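For instance, a Kafka Streams application derives its consumer group id and the names of its internal repartition and changelog topics from its application.id, so a single prefixed rule per application can cover all of its internal topics, with source and sink topics granted separately. A minimal sketch under that assumption follows; the application.id, group principal and broker address are hypothetical.

```java
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class StreamsAclExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");

        String appId = "frauddetection-app"; // the Streams application.id

        try (AdminClient admin = AdminClient.create(props)) {
            admin.createAcls(Arrays.asList(
                // Internal repartition and changelog topics share the application.id prefix.
                new AclBinding(
                    new ResourcePattern(ResourceType.TOPIC, appId, PatternType.PREFIXED),
                    new AccessControlEntry("Group:FraudDetection", "*",
                        AclOperation.ALL, AclPermissionType.ALLOW)),
                // The consumer group id is the application.id itself.
                new AclBinding(
                    new ResourcePattern(ResourceType.GROUP, appId, PatternType.LITERAL),
                    new AccessControlEntry("Group:FraudDetection", "*",
                        AclOperation.READ, AclPermissionType.ALLOW)))
            ).all().get();
        }
    }
}
```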
As always, we are happy to hear your feedback. Please post your questions and suggestions to the public Confluent Platform mailing list, join our community Slack channel or contact us directly!