
New Confluent Features Make It Easier and Faster to Connect, Process, and Share Trusted Data, Everywhere
[May 16, 2023]



Confluent, Inc. (NASDAQ: CFLT), the data streaming pioneer, today announced new Confluent Cloud capabilities that give customers confidence that their data is trustworthy and can be easily processed and securely shared. With Data Quality Rules, an expansion of the Stream Governance suite, organizations can easily resolve data quality issues so data can be relied on for making business-critical decisions. In addition, Confluent's new Custom Connectors, Stream Sharing, the Kora Engine, and early access program for managed Apache Flink make it easier for companies to gain insights from their data on one platform, reducing operational burdens and ensuring industry-leading performance.

"Real-time data is the lifeblood of every organization, but it's extremely challenging to manage data coming from different sources in real time and guarantee that it's trustworthy," said Shaun Clowes, Chief Product Officer at Confluent. "As a result, many organizations build a patchwork of solutions plagued with silos and business inefficiencies. Confluent Cloud's new capabilities fix these issues by providing an easy path to ensuring trusted data can be shared with the right people in the right formats."

Having high-quality data that can be quickly shared between teams, customers, and partners helps businesses make decisions faster. However, doing so is a challenge for many companies dealing with highly distributed open source infrastructure like Apache Kafka. According to Confluent's new 2023 Data Streaming Report, 72% of IT leaders cite the inconsistent use of integration methods and standards as a challenge or major hurdle to their data streaming infrastructure. Today's announcement addresses these challenges with the following capabilities:

Data Quality Rules bolster Confluent's Stream Governance suite to further ensure trustworthy data

Data contracts are formal agreements between upstream and downstream components around the structure and semantics of data that is in motion. One critical component of enforcing data contracts is rules or policies that ensure data streams are high-quality, fit for consumption, and resilient to schema evolution over time.

To address the need for more comprehensive data contracts, Confluent's Data Quality Rules, a new feature in Stream Governance, enable organizations to deliver trusted, high-quality data streams across the organization using customizable rules that ensure data integrity and compatibility. With Data Quality Rules, schemas stored in Schema Registry can now be augmented with several types of rules so teams can:

  • Ensure high data integrity by validating and constraining the values of individual fields within a data stream.
  • Quickly resolve data quality issues with customizable follow-up actions on incompatible messages.
  • Simplify schema evolution using migration rules to transform messages from one data format to another.
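To make the idea concrete, the following is an illustrative sketch of what registering a schema with an attached Data Quality Rule might look like. The field names, rule expression, subject name, and endpoint are assumptions for illustration, not details from this announcement; the general shape (a rule set stored alongside the schema in Schema Registry, with a condition expression and a follow-up action for incompatible messages) mirrors the capabilities described above.

```python
import json

# Hypothetical Avro schema for an order event (illustrative field names).
order_schema = {
    "type": "record",
    "name": "Order",
    "fields": [
        {"name": "order_id", "type": "string"},
        {"name": "amount", "type": "double"},
    ],
}

# Sketch of a Schema Registry payload: the schema plus a rule set.
payload = {
    "schemaType": "AVRO",
    "schema": json.dumps(order_schema),
    "ruleSet": {
        "domainRules": [
            {
                "name": "require_positive_amount",
                "kind": "CONDITION",       # validate a field, don't transform it
                "type": "CEL",             # a Common Expression Language predicate
                "mode": "WRITE",           # checked when producers write
                "expr": "message.amount > 0.0",
                "onFailure": "DLQ",        # follow-up action: route bad messages
                                           # to a dead-letter queue
            }
        ]
    },
}

# A client would POST this to the subject's versions endpoint, e.g.:
# requests.post(f"{registry_url}/subjects/orders-value/versions", json=payload)
print(payload["ruleSet"]["domainRules"][0]["expr"])
```

The design point is that the rule travels with the schema: every producer and consumer that resolves the schema from Schema Registry also sees the contract, rather than each team re-implementing validation separately.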

"High levels of data quality and trust improves business outcomes, and this is especially important for data streaming where analytics, decisions, and actions are triggered in real time," said Stewart Bond, VP of Data Intelligence and Integration Software at IDC. "We found that customer satisfaction benefits the most from high quality data. And, when there is a lack of trust caused by low quality data, operational costs are hit the hardest. Capabilities like Data Quality Rules help organizations ensure data streams can be trusted by validating their integrity and quickly resolving quality issues."

Custom Connectors enable any Kafka connector to run on Confluent Cloud without infrastructure management

Many organizations have unique data architectures and need to build their own connectors to integrate their homegrown data systems and custom applications with Apache Kafka. However, these custom-built connectors then need to be self-managed, requiring manual provisioning, upgrading, and monitoring, taking away valuable time and resources from other business-critical activities. By expanding Confluent's Connector ecosystem, Custom Connectors allow teams to:



  • Quickly connect to any data system using the team's own Kafka Connect plugins without code changes.
  • Ensure high availability and performance using logs and metrics to monitor the health of the team's connectors and workers.
  • Eliminate the operational burden of provisioning and perpetually managing low-level connector infrastructure.
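As a sketch of what this looks like in practice: once a team's custom plugin is uploaded, the connector is configured in the standard Kafka Connect style. The connector class, topic, and endpoint below are hypothetical placeholders, not part of the announcement; the point is that only the configuration is the team's responsibility, while the workers running it are managed.

```python
# Illustrative connector configuration (standard Kafka Connect shape).
# The plugin class and the homegrown-system settings are hypothetical.
connector_config = {
    "name": "inhouse-events-source",
    "config": {
        # Fully-qualified class from the uploaded custom plugin (assumed name).
        "connector.class": "com.example.connect.InHouseEventSourceConnector",
        "tasks.max": "1",
        # Kafka topic the source connector writes to (assumed name).
        "kafka.topic": "inhouse.events",
        # Settings the custom plugin defines for the homegrown system
        # (placeholders for illustration).
        "events.endpoint": "https://events.internal.example.com",
        "events.poll.interval.ms": "5000",
    },
}

# The configuration is submitted to the managed service instead of
# self-hosted Connect workers, e.g.:
# requests.post(f"{connect_url}/connectors", json=connector_config)
print(connector_config["name"])
```

Because the plugin runs unmodified, the same configuration keys a team used against self-managed Connect workers carry over; what disappears is the provisioning, patching, and monitoring of the worker fleet itself.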

"To provide accurate and current data across the Trimble Platform, it requires streaming data pipelines that connect our internal services and data systems across the globe," said Graham Garvin, Product Manager at Trimble. "Custom Connectors will allow us to quickly bridge our in-house event service and Kafka without setting up and managing the underlying connector infrastructure. We will be able to easily upload our custom-built connectors to seamlessly stream data into Confluent and shift our focus to higher-value activities."

Confluent's new Custom Connectors are available on AWS in select regions. Support for additional regions and other cloud providers will be available in the future.


Stream Sharing facilitates easy data sharing with enterprise-grade security

No organization exists in isolation. Businesses handling activities such as inventory management, deliveries, and financial trading need to constantly exchange real-time data internally and externally across their ecosystem to make informed decisions, build seamless customer experiences, and improve operations. Today, many organizations still rely on flat file transmissions or polling APIs for data exchange, resulting in data delays, security risks, and extra integration complexity. Confluent's Stream Sharing provides the easiest and safest alternative for sharing streaming data across organizations. Using Stream Sharing, teams can:

  • Easily exchange real-time data without delays directly from Confluent to any Kafka client.
  • Safely share and protect their data with robust authenticated sharing, access management, and layered encryption controls.
  • Trust the quality and compatibility of shared data by enforcing consistent schemas across users, teams, and organizations.
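From the recipient's side, "any Kafka client" means consuming a shared stream is ordinary Kafka code. The sketch below shows a plain client configuration; the bootstrap address, credentials, and topic name are placeholders a recipient would receive from the sharing organization, and the configuration keys are the standard ones used by Kafka clients for authenticated, encrypted access.

```python
# Illustrative consumer configuration for a shared stream. All values
# are placeholders supplied by the sharing organization.
consumer_conf = {
    "bootstrap.servers": "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092",
    "security.protocol": "SASL_SSL",    # authenticated, encrypted transport
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<shared-api-key>",
    "sasl.password": "<shared-api-secret>",
    "group.id": "partner-inventory-app",
    "auto.offset.reset": "earliest",
}

# With the confluent-kafka package installed, consumption is standard:
# from confluent_kafka import Consumer
# consumer = Consumer(consumer_conf)
# consumer.subscribe(["shared.inventory.updates"])
print(consumer_conf["security.protocol"])
```

No bespoke file-transfer or polling integration is involved: the recipient reads the shared topic with the same client code used for any other Kafka topic, while access remains governed by the credentials the sharer issues and can revoke.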

Additional innovations to be announced at Kafka Summit London:

Organized by Confluent, Kafka Summit London is the premier event for developers, architects, data engineers, DevOps professionals, and those looking to learn more about streaming data and Apache Kafka. This event focuses on best practices, how to build next-generation systems, and what the future of streaming technologies will be.

Other new innovations in Confluent's leading data streaming platform include:

  • Kora powers Confluent Cloud to deliver faster insights and experiences: Since 2018, Confluent has spent over 5 million engineering hours to deliver Kora, an Apache Kafka engine built for the cloud. With its multi-tenancy and serverless abstraction, decoupled networking-storage-compute layers, automated operations, and global availability, Kora enables Confluent Cloud customers to scale 30x faster, store data with no retention limits, protect it with a 99.99% SLA, and power workloads with low latency.
  • Confluent's Apache Flink early access program previews advanced stream processing capabilities: Stream processing plays a critical role in data streaming infrastructure by filtering, joining, and aggregating data in real time, enabling downstream applications and systems to deliver instant insights. Customers are turning to Flink to handle large-scale, high-throughput, low-latency data streams with its advanced stream processing capabilities and robust developer community. Following Confluent's Immerok acquisition, the early access program for managed Apache Flink has opened to select Confluent Cloud customers to try the service and help shape the roadmap by partnering with the company's product and engineering teams.

Connect with Confluent at Kafka Summit London to learn more!

Learn more about these new features at Kafka Summit London! Register here to watch the keynote presentation by Confluent's CEO and cofounder Jay Kreps about the future of stream processing with Apache Flink today, May 16 at 10 am BST.


About Confluent

Confluent is the data streaming platform that is pioneering a fundamentally new category of data infrastructure that sets data in motion. Confluent's cloud-native offering is the foundational platform for data in motion, designed to be the intelligent connective tissue enabling real-time data, from multiple sources, to constantly stream across the organization. With Confluent, organizations can meet the new business imperative of delivering rich, digital front-end customer experiences and transitioning to sophisticated, real-time, software-driven back-end operations. To learn more, please visit www.confluent.io.

Confluent and associated marks are trademarks or registered trademarks of Confluent, Inc.

Apache® and Apache Kafka® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. No endorsement by the Apache Software Foundation is implied by the use of these marks. All other trademarks are the property of their respective owners.

This press release contains forward-looking statements. The words "believe," "may," "will," "ahead," "estimate," "continue," "anticipate," "intend," "expect," "seek," "plan," "project," and similar expressions are intended to identify forward-looking statements. These forward-looking statements are subject to risks, uncertainties, and assumptions. If the risks materialize or assumptions prove incorrect, actual results could differ materially from the results implied by these forward-looking statements. Confluent assumes no obligation to, and does not currently intend to, update any such forward-looking statements after the date of this release.

