Prompt Engineering in Action: How AI Matters

| minute read

In our recent internal workshop, Hristiyan Dimitrov led a hands-on session on prompt engineering for developers—a session that was merely about real practice and ready-to-test prompts.

It set the stage for our upcoming live coding event with AI agents, showing that generative AI already matters for how developers build, learn, and debug software.

Why Are Developers Turning to AI Tools Instead of Search Engines?

Hristiyan began by drawing a simple but powerful comparison:
Google retrieves information; AI understands it.

While search engines excel at listing results, Large Language Models (LLMs) like GPT or Claude provide contextual, conversational support tailored to a developer’s exact problem.

Even his 13-year-old cousin has made the switch:


"It gives me exactly what I need, he said."

That simple statement captures a larger shift—how we now expect our tools not just to find information, but to think with us.

What’s Really Happening Behind the Scenes of LLMs?

To make sense of the AI’s magic, Hristiyan broke down a few key mechanics:

  • Tokens – The currency of AI conversations. Each word (or part of one) costs tokens, so understanding how prompts affect usage helps control both cost and performance.
  • Temperature – The creativity dial. Lower values give precise answers; higher ones add imagination (and the occasional nonsense).
  • Hallucinations – When AI confidently invents things that sound true but are not. The prediction models work just like that - it has to give a guess.

He illustrated this with examples from OpenAI and Anthropic, reminding us that even professionals in law or finance have been caught off guard by AI-generated “facts.”

How Secure Is AI—And What Should Developers Watch Out For?

Security, Hristiyan noted, remains one of the biggest blind spots. He outlined three pillars every developer should keep in mind: availability, confidentiality, and integrity.

  • Prompt injection mirrors SQL injection—attackers manipulate input to extract sensitive data.
  • Data poisoning happens when malicious code sneaks into public repositories that models later train on.
  • Real-world examples ranged from biased image classification to chatbots generating offensive responses—all consequences of unguarded data flows.


How Can Prompt Engineering Boost Developer Productivity?

Hristiyan showcased several real use cases where AI already makes developers faster and smarter:

  • Debugging smarter: Instead of asking “What’s wrong?”, try “Walk me through this function step by step.”
  • Learning faster: Compare familiar and new concepts—like “Explain Docker to me like I know Spring Boot.”
  • Testing better: Let AI draft unit tests, mock dependencies, or suggest refactors for legacy code.

What Happens When You Let AI Do the Coding?

To prove it’s not just theory, Hristiyan went live with Cursor, an AI-powered IDE. In minutes, he built a ticketing system from scratch—by giving instructions, not writing code.


The AI handled everything:

  • Creating a Kanban board
  • Adding user detail tabs
  • Fixing routing issues in real time

"I never wrote this code. I just gave it instructions,"

One of the most common developer questions came up during the Q&A: What’s the right prompting strategy—chunking or full blocks? Hristiyan’s advice: Experiment.

Shorter, focused prompts usually perform better, especially when working with tools like Cursor or Codex. Overloading the context window often leads to hallucinations, so clarity beats length every time.

We thank Hristiyan for the enriching and practical session and would like to invite everyone to check the video highlight of the stream - check out this link.

Search