Behind the Scenes: CFP Selection for AWS Community Day CEE

The truth about what happens when you hit “submit” on your talk proposal

There is no perfect CFP selection process. It’s one of the hardest jobs in event organization, and you’ll only truly understand this when you’re on the other side of the table. As a speaker, you’re left waiting, hoping for that approval email, never knowing what happens behind the scenes or why your carefully crafted proposal might get rejected.

Let me take you inside our selection process for AWS Community Day CEE. Perhaps this transparency will help other Community Days refine their own CFP processes — and maybe give future speakers some insight into what we’re really looking for.

The Foundation: Building Your Selection Team

The most critical decision isn’t your scoring system or evaluation criteria — it’s assembling the right team. Never decide alone or in pairs. The ideal selection team consists of 5-10 people, and diversity isn’t just a nice-to-have — it’s essential for quality decisions.

We ensure at least two women are part of our selection committee. This isn’t tokenism; it’s about recognizing that diverse perspectives lead to better speaker lineups. The more evaluators you have, the clearer picture you’ll get of the top selections, but don’t exceed 10 people — that becomes ineffective and unwieldy.

Our Four-Round Selection Process

Round 0: Setting the Stage

Before any evaluation begins, we prepare the submissions for fair review. We filter out duplicates, spam, and incomplete submissions to focus on legitimate proposals. Personal contact details such as email addresses and phone numbers are stripped from the review materials for GDPR compliance, leaving reviewers with only the essentials: speaker names, talk titles, abstracts, speaker bios, and LinkedIn profiles.

Since we’ve already secured AWS speakers for keynotes and special sessions, we focus our CFP exclusively on community voices for the Main Stage presentation track. This approach ensures we maximize the unique perspectives that make community events special — the real-world experiences, hard-won lessons, and honest insights that complement official AWS content.

Round 1: The Numbers Game

Each reviewer selects their TOP 10 speakers using our point system:

  • 1st choice: 12 points
  • 2nd choice: 10 points
  • 3rd-10th choices: 8, 7, 6, 5, 4, 3, 2, 1 points

Here’s what’s interesting: each voter’s top two choices carry extra weight (12 and 10 points, versus the steady one-point drop from 8 to 1), allowing a single reviewer to significantly influence the top selections. The surprising part? More than half of our selected speakers never received a single maximum score of 12 points.
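For the curious, the point system above can be sketched in a few lines of Python. This is an illustrative reconstruction, not our actual tooling, and the names and data shapes are made up:

```python
# Points by rank: 1st = 12, 2nd = 10, 3rd-10th = 8 down to 1.
POINTS = [12, 10, 8, 7, 6, 5, 4, 3, 2, 1]

def tally(ballots):
    """Aggregate each reviewer's ranked TOP 10 into total scores.

    ballots: a list of ballots, each an ordered TOP 10 list of
    speaker names (first entry = 1st choice). Returns a dict
    mapping speaker -> total points across all reviewers.
    """
    scores = {}
    for ballot in ballots:
        for rank, speaker in enumerate(ballot[:10]):
            scores[speaker] = scores.get(speaker, 0) + POINTS[rank]
    return scores

# Two hypothetical ballots: note how the 12/10 head start for the
# top two choices dominates the totals.
ballots = [
    ["Alice", "Bob", "Carol"],
    ["Bob", "Alice", "Dave"],
]
tally(ballots)  # {'Alice': 22, 'Bob': 22, 'Carol': 8, 'Dave': 8}
```

The steep gap between 2nd place (10 points) and 3rd place (8 points, then single-point steps) is exactly why a reviewer’s top two picks punch above their weight.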

We evaluate based on:

  • Stage performance ability – Can they actually present well?
  • Previous presentation quality – We watch recordings when available
  • Content uniqueness – Fresh perspectives win over recycled content
  • Regional diversity – Avoiding speakers who’ve recently covered similar ground in CEE

Round 2: The Reality Check

We aggregate scores and handle the messy reality of multiple submissions per speaker. Through majority vote, we select one talk per multi-submission speaker. We ensure topic diversity — because nobody wants a conference that’s 80% GenAI presentations — and identify three backup speakers for the inevitable cancellations.
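Collapsing multiple submissions down to one talk per speaker is simple to express in code. The sketch below is hypothetical (our committee does this by majority vote; here the vote count stands in for that discussion):

```python
def one_talk_per_speaker(submissions):
    """Keep each speaker's single strongest submission.

    submissions: a list of (speaker, talk_title, vote_count) tuples,
    where vote_count is a stand-in for the committee's majority vote.
    Returns a dict mapping speaker -> chosen talk title.
    """
    best = {}
    for speaker, title, votes in submissions:
        if speaker not in best or votes > best[speaker][1]:
            best[speaker] = (title, votes)
    return {speaker: title for speaker, (title, _) in best.items()}

# Hypothetical data: Alice submitted two abstracts, Bob one.
subs = [
    ("Alice", "Serverless at Scale", 4),
    ("Alice", "Intro to Lambda", 2),
    ("Bob", "EKS War Stories", 3),
]
one_talk_per_speaker(subs)
# {'Alice': 'Serverless at Scale', 'Bob': 'EKS War Stories'}
```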

Round 3: Confirmation Anxiety

Selected speakers get invitations, and we hold our breath. Cancellations happen, and we had two this year — painful for a conference of our size, but you have to adapt.

Round 4: Going Public

The final lineup goes live across awscommunity.eu, Eventbrite, LinkedIn, and our other channels.

What We Learned: The Good, Bad, and Surprising

The Anti-Bias Rule That Works

We implemented a crucial rule: speakers need votes from at least two different evaluators to make it to the final selection. Some submissions received maximum points from just one passionate advocate but were ruled out to prevent bias. This might seem harsh, but it ensures broader appeal.
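The rule itself is a one-line filter. A minimal sketch, with illustrative names and data shapes:

```python
def passes_anti_bias(votes_by_reviewer, speaker, min_reviewers=2):
    """Anti-bias rule: a speaker stays in contention only if at
    least `min_reviewers` different evaluators voted for them.

    votes_by_reviewer: dict mapping reviewer -> set of speakers
    that reviewer placed in their TOP 10.
    """
    backers = sum(
        1 for voted in votes_by_reviewer.values() if speaker in voted
    )
    return backers >= min_reviewers

# Hypothetical ballots: Alice has two backers, Carol only one
# passionate advocate, so Carol is ruled out regardless of points.
votes = {
    "r1": {"Alice", "Bob"},
    "r2": {"Alice"},
    "r3": {"Carol"},
}
passes_anti_bias(votes, "Alice")  # True
passes_anti_bias(votes, "Carol")  # False
```

Note that the filter ignores point totals entirely: a single 12-point vote still fails it, which is the whole point of the rule.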

What Our System Doesn’t Solve

Despite having women on our selection committee, we still struggle with speaker diversity. The pipeline problem is real — we can only select from who applies. Including diverse voices in your selection process helps, but it’s not a magic solution.

Our Honest Mistakes

Multiple Submissions Per Person: We didn’t limit submission numbers, leading to some speakers submitting up to five different abstracts. This diluted their scoring potential and made selection unnecessarily complex.

Workshop Wishful Thinking: We really wanted community workshops, but almost no one chose this category. Speakers come prepared for presentations, not interactive sessions.

Session Length Confusion: Offering 30, 60, and 90-minute sessions (mainly for workshops that didn’t materialize) was a mistake. People can’t focus that long, and you sacrifice having more diverse sessions. Next time: 30 or 45 minutes maximum.

Post-Selection Analysis: Understanding Our Biases

At the end of our process, we ran an AI analysis of our selection patterns. The results were eye-opening:

Reviewer Insights

  • Linda Mohamed: Focused on security and advanced architecture
  • Lucian Patian: Emphasized serverless and data solutions
  • Lydia Delyova: Balanced approach across IoT and infrastructure
  • Michal Salenci: Security and AI-focused selections
  • Mihaly Balassy: Cost optimization and architecture emphasis
  • Philipp Bergsmann: Technical depth and emerging technologies

Quality Indicators

  • Strong representation across all AWS service categories
  • Balanced mix of theoretical and practical sessions
  • Good distribution of foundational to expert-level content
  • High proportion of real-world case studies and experiences

The Bottom Line

Our selection process isn’t perfect, but it’s transparent, systematic, and constantly evolving. We prioritize community voices over corporate messaging, diversity over convenience, and real-world insights over marketing polish.

To future speakers: we’re not looking for perfection. We’re looking for authenticity, unique perspectives, and the kind of hard-won wisdom that only comes from actually building things in production. Your war stories, your failures, your “here’s what nobody tells you” moments — that’s what makes community conferences special.

To fellow organizers: steal whatever works from our process, but remember — the system is only as good as the people implementing it. Invest in your selection team, be transparent about your criteria, and don’t be afraid to evolve your process based on what you learn.

The CFP selection process will always be imperfect, but with the right approach, it can be fair, effective, and focused on what really matters: delivering exceptional value to the AWS community.

Published in Community