Thoughts about Kafka Summit 2020

Thousands of companies globally build their businesses on top of Apache Kafka® – a community distributed event streaming platform capable of handling trillions of events a day. Kafka has quickly evolved from a messaging queue to a full-fledged event streaming platform, and at VirtusLab we also take advantage of its capabilities.


VirtusLab @ Kafka Summit: Event Streaming Everywhere

Kafka Summit is the premier event for data architects, engineers, DevOps professionals, and developers who want to learn about streaming data. It brings the Apache Kafka community together to share best practices, write code, and discuss the future of streaming technologies.

Kafka Summit 2020 was hosted virtually. As sponsors, we had the opportunity to participate in a 100% virtual event for the first time. It turned out to be a completely different experience for us and, thanks to the support of the organizers, a positive one. We definitely see potential in this type of conference, especially given the huge audience of around 22,000 attendees from all over the world. The absence of travel restrictions also makes it easier to collect contacts, which from a business point of view was a priority for us. One suggestion: reaching business-oriented participants could be made easier and quicker, while respecting each participant's preferences. We are aware that most developers would not be interested in such contacts, but some participants would appreciate the option. There is room for improvement here, though we realize that fully remote events on this scale are still quite a challenge. In conclusion, we are glad that we were able to attend Kafka Summit as a sponsor.

Daniel Jagielski @ Kafka Summit

Daniel Jagielski is a tech lead at VirtusLab, managing a team building an identity management platform. He is passionate about applying best practices in designing distributed software systems. Nowadays, he focuses mainly on stream processing and event-driven reactive architectures using JVM-based languages, Kafka, and Kubernetes. Daniel always strives to provide a tailor-made solution for the given requirements and to find the right balance between “doing the right thing” and “doing the thing right,” using domain-driven design techniques and the most appropriate tech stack.

Daniel presented the journey the Identity team has gone through with the Risk Engine project, which is based on stream processing and Kafka. The key emphasis was on the lessons learned from building and maintaining the project in production for over two years. Daniel showed how the takeaways from an initial failure were converted into success in the next iteration of the product. After the session, he answered the most popular questions about performance testing and the platform’s maintenance.

If you missed Daniel’s talk, Risk Management in Retail with Stream Processing, here is the opportunity to watch it online!


The results of the contest

During Kafka Summit, we ran a contest for conference participants at our virtual booth. The prize for answering the contest question was the game “The Witcher”. Congratulations to all the winners! More info here.

Written by

Natalia Romanowska, Sep 29, 2020