As AI continues to reshape how we think, create, and work, one of the most critical skills today is knowing how to work with it. That’s exactly what Mr. MSI Sakib, Founder and CEO of Prothom Tech and Co-Founder of Yaaply, explored in his session at the AI Hackathon 2025. Titled “Mastering GenAI: Selecting the Right Tools and Crafting Impactful Prompts,” the session offered an in-depth look at evaluating GenAI tools and engineering prompts that drive meaningful results.
The Right Tool for the Right Job
Mr. Sakib opened with a simple yet often overlooked truth: AI tools don’t come with manuals. Unlike traditional software, generative AI systems such as OpenAI’s ChatGPT, Claude, Gemini, DeepSeek, and Perplexity are designed to be flexible, and that flexibility can be overwhelming without the right understanding.
To use GenAI tools effectively, we must align their capabilities with our specific needs.
Know the LLM Before You Prompt
Different LLMs have different strengths. Mr. Sakib walked through a range of technical dimensions to evaluate before choosing an AI model:
- Token Limits: Gemini Pro currently offers the largest input context window, up to 2 million tokens, enabling more context in a single interaction. For reference, one token equals roughly 4 characters, or about ¾ of an English word (a rough estimation sketch follows this list).
- Coding Capability: If your use case involves programming, make sure your selected LLM is proficient in generating functional and syntactically correct code.
- Real-Time Data: For applications that require up-to-date information, LLMs with browsing capabilities like OpenAI and DeepSeek are essential.
- File and Image Input: Traditional OCR is becoming obsolete thanks to LLMs that can now read and interpret files or images. This is especially helpful for visual projects or data extraction.
- Image Generation: Tools like DALL·E can create high-quality, realistic visuals. Mr. Sakib emphasized testing for:
  - Practicality (Does the image make sense?)
  - Detail Accuracy (Are fingers, facial features, and text rendered properly?)
  - Variations (Can the tool generate different angles or versions?)
  - Copyright Sensitivity (Are the generated assets safe for public use?)
- Bias & Neutrality: AI models may reflect geopolitical or cultural biases. Some avoid politically sensitive topics altogether. Knowing the origin and training data of an LLM helps anticipate how it might respond to sensitive prompts.
- Content Tone: Tools like Claude, trained on open libraries and books, excel at setting tone, making them ideal for storytelling or emotionally nuanced content.
- Control & Resistance: Some LLMs may reject prompts they deem “harmful” or unethical. Mr. Sakib shared an example: a prompt to create a billboard that says “No jobs in Toronto” was refused by the AI. Understanding the boundaries of each tool helps avoid dead ends.
- Logical Reasoning: One of the ultimate tests for GenAI is its ability to think. Presenting hypothetical situations (e.g., “Should an illiterate but skilled driver get a license?”) is a good way to evaluate a model’s reasoning ability.
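The token-limit point above lends itself to a quick back-of-the-envelope check. The sketch below is an estimate only, assuming the roughly-four-characters-per-token rule of thumb mentioned in the session; each provider’s real tokenizer will count differently, and the 2 million figure is simply the Gemini Pro limit cited above.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the ~4 characters per token rule of thumb."""
    return max(1, round(len(text) / chars_per_token))


def fits_context(text: str, context_limit: int = 2_000_000) -> bool:
    """Check whether a document plausibly fits a model's input window.

    The 2M default mirrors the Gemini Pro figure cited above; swap in the
    limit of whatever model you are evaluating.
    """
    return estimate_tokens(text) <= context_limit


if __name__ == "__main__":
    sample = "Dhaka's air pollution has worsened significantly over the past year."
    print(estimate_tokens(sample))  # ~17 tokens for a ~68-character sentence
    print(fits_context(sample))     # True
```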
Prompt Engineering: A New-Age Skill
Prompt engineering is no longer a fringe task; it is a profession in its own right. According to Glassdoor, salaries for prompt engineers range from $143k to $238k, reflecting how critical the skill has become.
But with power comes risk.
Mr. Sakib warned participants against sharing confidential, internal, or sensitive data with AI tools. LLMs may store interaction data or reuse content in future completions, especially if similar prompts are received from other users. This makes it essential to avoid sharing unreleased scripts, private opinions, or corporate strategies.
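One way to act on this caution is a lightweight pre-check that scans a prompt for obviously sensitive patterns before anything is sent. The sketch below is purely illustrative and not from the session; the patterns and the screen_prompt helper are assumptions, and a real policy would encode your own organization’s rules.

```python
import re

# Illustrative patterns only; a real policy would be far more thorough.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api-key-like string": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "confidential marker": re.compile(r"\b(confidential|internal only|do not share)\b", re.IGNORECASE),
}


def screen_prompt(prompt: str) -> list[str]:
    """Return a list of warnings for anything in the prompt that looks sensitive."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


warnings = screen_prompt("Summarize this CONFIDENTIAL roadmap before the board meeting.")
if warnings:
    print("Review before sending:", ", ".join(warnings))
```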
The 3 Pillars of a Powerful Prompt
A well-structured prompt includes:
- Persona – Define who the AI should emulate. Is it a poet? A marketer? A scientist?
- Goal – Be clear about what you want: a summary, an image, a plan, etc.
- Context – Set the scene. Include necessary background so the AI understands the full scope.
Example:
“You are an environmental scientist. Your goal is to create a public awareness poster. The context is: Dhaka’s air pollution has worsened significantly over the past year.”
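As a minimal sketch, those three pillars map naturally onto a small template helper. The build_prompt function below is illustrative rather than something shown in the session; it simply reproduces the example prompt above from its parts.

```python
def build_prompt(persona: str, goal: str, context: str) -> str:
    """Assemble a prompt from the three pillars: persona, goal, and context."""
    return (
        f"You are {persona}. "
        f"Your goal is to {goal}. "
        f"The context is: {context}"
    )


prompt = build_prompt(
    persona="an environmental scientist",
    goal="create a public awareness poster",
    context="Dhaka's air pollution has worsened significantly over the past year.",
)
print(prompt)
```

Keeping the pillars as separate parameters makes it easy to swap the persona or context while reusing the same goal across prompts.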
Pro Tips & Advanced Tactics
- URL Pre-Training: Feed chunks of content (e.g., from a webpage) before issuing your prompt to make responses more aligned with your desired topic; see the sketch after this list.
- Follow-up Prompts: Asking follow-up questions can improve depth and clarity in responses, especially when working iteratively.
- Prompt Reverse Engineering for Images: Show the AI an example image and ask it to generate the prompt that could produce something similar; this is a handy way to build a style template.
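A rough sketch of the “URL pre-training” tactic is below, assuming the requests library is available; the crude tag-stripping, the chunk size, and the way chunks are prepended to the question are all assumptions for illustration, not a prescribed workflow from the session.

```python
import re

import requests


def fetch_page_text(url: str) -> str:
    """Download a page and strip tags crudely; a real pipeline would use an HTML parser."""
    html = requests.get(url, timeout=10).text
    return re.sub(r"<[^>]+>", " ", html)


def chunk_text(text: str, chunk_size: int = 2_000) -> list[str]:
    """Split the source text into fixed-size chunks to feed in before the prompt."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]


def primed_prompt(chunks: list[str], question: str) -> str:
    """Prepend the reference material so the model answers with that context in mind."""
    background = "\n\n".join(chunks)
    return f"Use the following reference material:\n{background}\n\nNow answer: {question}"


chunks = chunk_text(fetch_page_text("https://example.com"))
print(primed_prompt(chunks[:3], "Summarize the page in three bullet points."))
```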
Final Takeaways
Mr. Sakib closed the session with a reminder that generative AI is entering a pivotal phase, impacting industries, workflows, and careers. As we navigate this new landscape, knowing how to work with AI, and which tools to trust, will become as important as the outputs we expect from them.