CPDC Winner Spotlight: 💡 Strategies to Improve Solutions for Task 2

Hope you’re enjoying Round 1 of the CPDC 2025 Challenge! As you prepare for the upcoming round, we’re excited to share a spotlight on the winning strategies from CPDC 2023. These highlights offer practical insights and implementation tips to help strengthen your approach.

:bulb: The solutions for Task 1 of CPDC 2023 (a dialogue generation task) are most closely related to Task 2 of CPDC 2025, which also focuses on persona-consistent dialogue generation.


:one: Task 1 Winner
:1st_place_medal: First Place: Kaihua Ni
:bulb: Key Insight: Combining LLM Fine-tuning with Advanced Prompt Engineering
Username: @ni_kai_hua

Background: AI graduate from the University of Leeds with experience at Augmentum and CareerBuilder. Specialises in AI, deep learning, and language dynamics.

Winning Strategy:

Two-Pronged Approach:

  • Fine-tuned an LLM to emulate specific individuals
  • Engineered precise, persona-aligned prompts to guide output generation

Key Methods:

Fine-Tuning with Transfer Learning:

  • Used curated datasets (dialogues, writings) aligned with target personas
  • Adapted models to reflect individual styles and semantics

Advanced Prompt Engineering:

  • Defined clear conversational goals
  • Subtly incorporated persona traits
  • Maintained coherence across multiple dialogue turns

Dialogue Coherence:

  • Applied attention window tuning and context control

Custom Evaluation Loop:

  • Built bespoke evaluation metrics aligned with CPDC scoring
  • Iteratively refined the system based on those metrics (a minimal sketch follows)
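
Kaihua’s exact metrics aren’t published, but a bespoke loop of this shape is straightforward to build. Below is a minimal sketch in Python; `score_response` and its weights are illustrative stand-ins for whatever proxies you choose, not CPDC’s official scoring:

```python
def score_response(response: str, persona: dict) -> float:
    """Combine cheap proxy metrics that roughly track persona consistency."""
    persona_terms = {w.lower() for trait in persona.values() for w in trait.split()}
    words = {w.lower().strip(".,!?") for w in response.split()}
    grounding = len(words & persona_terms) / max(len(persona_terms), 1)  # persona overlap
    brevity = 1.0 if 5 <= len(response.split()) <= 60 else 0.5           # length sanity check
    return 0.7 * grounding + 0.3 * brevity

def pick_best(candidates: list[str], persona: dict) -> str:
    """Select the best-scoring candidate; rerun after every prompt or model tweak."""
    return max(candidates, key=lambda r: score_response(r, persona))
```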

Ethical Safeguards:

  • Embedded privacy protections
  • Prevented harmful/inappropriate content
  • Ensured ethical persona emulation

Insight: Demonstrated how LLMs can generate nuanced, human-like dialogue without compromising integrity

:computer: Implementation Tips
Want to apply Kaihua’s approach to your solution? Here are some practical steps:

For the fine-tuning component:

  • Start with a smaller, more efficient LLM as your base model
  • Create a curated dataset that specifically represents your target personas
  • Focus on preserving stylistic elements in your training data, not just semantic content (see the sketch below)
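
If you want to try the fine-tuning side, a parameter-efficient setup such as LoRA (here via Hugging Face’s transformers, datasets, and peft libraries) keeps compute manageable. This is a minimal sketch, not Kaihua’s actual pipeline: the base model, data file, and hyperparameters are placeholder choices.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "gpt2"  # a small base model keeps iteration cheap; swap in a stronger one later
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token  # GPT-2 ships without a pad token

# Wrap the base model with low-rank adapters so only a small fraction of
# weights is trained; "c_attn" is the attention projection in GPT-2.
model = get_peft_model(
    AutoModelForCausalLM.from_pretrained(base),
    LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM"),
)

# persona_dialogues.jsonl is a hypothetical file: one {"text": ...} record per
# dialogue, with the persona description and turns serialised into the text.
data = load_dataset("json", data_files="persona_dialogues.jsonl")["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="persona-lora", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),  # causal LM labels
).train()
```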

For the prompt engineering component:

  • Structure your prompts with clear sections for conversation goal, persona traits, and dialogue history (see the sketch after this list)
  • Experiment with different attention window sizes to find optimal context retention
  • Implement a simple evaluation loop to measure improvements against CPDC’s scoring criteria
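
Here is one shape the sectioned prompt and a simple context window could take. The section names and the six-turn window are illustrative defaults to tune, not CPDC requirements:

```python
def build_prompt(goal: str, persona: dict, history: list[str], window: int = 6) -> str:
    """Assemble a sectioned prompt, keeping only the most recent dialogue turns."""
    traits = "\n".join(f"- {k}: {v}" for k, v in persona.items())
    recent = "\n".join(history[-window:])  # vary `window` to tune context retention
    return (
        f"## Goal\n{goal}\n\n"
        f"## Persona\n{traits}\n\n"
        f"## Recent dialogue\n{recent}\n\n"
        "## Instruction\nReply in character, consistent with the persona above."
    )

prompt = build_prompt(
    goal="Keep the user engaged while staying in character.",
    persona={"name": "Mika", "style": "warm, concise", "hobby": "birdwatching"},
    history=["User: Hi!", "Bot: Hello! Lovely morning for birdwatching."],
)
```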

:two: Task 1 Runner-Up
:2nd_place_medal: Second Place: Zhiyu Wang
:bulb: Key Insight: Principles-Driven Prompt Engineering for Persona Alignment
Username: @wangzhiyu918
Team: Zhiyu Wang, Puhong Duan, Zhuojun Xie, Wang Liu, Bin Sun, Xudong Kang, Shutao Li

Background: PhD candidate at Hunan University focusing on vision-language understanding, LLMs, and multi-modal LLMs.

Winning Strategy:

Core Focus: Prompt engineering inspired by recent LLM advancements (ChatGPT, LLaMA)

Key Methods:

  • Studied the guide “The Art of ChatGPT Prompting”
  • Based strategy on three principles:
    • Clarity: Specific language for accurate comprehension
    • Conciseness: Avoided unnecessary verbosity
    • Relevance: Ensured alignment with dialogue context and persona
  • Refined prompts using GPT-4
  • Deployed carefully designed prompt (available in their repository)

Insight: A methodical, prompt-focused design produced highly coherent, persona-aligned responses

:computer: Implementation Tips
Want to apply Zhiyu’s approach to your solution? Here are some practical steps:

Study effective prompting techniques:

  • Review prompting guidelines and best practices from established sources
  • Analyze the structure of successful prompts for persona-based dialogue

Apply the three core principles:

  • Clarity: Replace vague instructions with specific directives
  • Conciseness: Remove redundant or tangential information from prompts
  • Relevance: Ensure every element of your prompt directly contributes to persona alignment (a before/after example follows)
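
To make the three principles concrete, here is a hypothetical before/after rewrite of a single instruction (the persona is invented for illustration):

```python
# Before: vague, padded, and persona-free.
vague = "Please try to respond well and keep things interesting and on-topic."

# After: each principle applied in turn.
refined = (
    "You are Mika, a warm and concise birdwatcher. "     # relevance: persona up front
    "Answer the user's last message in 1-3 sentences, "  # clarity: concrete directive
    "staying consistent with the traits above."          # conciseness: nothing extra
)
```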

Iterative refinement:

  • Use GPT-4 or similar models to test prompt variations
  • Create a systematic testing framework to compare prompt performance (see the sketch below)
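
The framework can be as small as a loop that averages a proxy metric over a fixed set of test dialogues. In this sketch, `generate` and `score_case` are stubs to replace with your actual model call and metric:

```python
import statistics

def generate(template: str, case: dict) -> str:
    """Stub: call your model (API or local checkpoint) with the filled template."""
    return "stub response"

def score_case(response: str, case: dict) -> float:
    """Stub: plug in whatever proxy metric tracks CPDC's scoring criteria."""
    return float(len(response.split()))

def compare_prompts(variants: dict[str, str], cases: list[dict]) -> dict[str, float]:
    """Average each prompt variant's score over the same fixed test cases."""
    return {name: statistics.mean(score_case(generate(t, c), c) for c in cases)
            for name, t in variants.items()}

print(compare_prompts({"v1": "terse prompt", "v2": "detailed prompt"},
                      [{"persona": {}, "history": []}]))
```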

:three: Task 1 Third Place
:3rd_place_medal: Third Place: Kaidi Yan
:bulb: Key Insight: Strategic Minimalism in Prompt Design
Username: @kevin_yan
Team: Kaidi Yan, Jiayu Liu

Background: Software engineer at a large technology company, primarily working on server-side C++ development, with recent focus on LLMs.

Winning Strategy:

Core Focus: Targeted prompt engineering, carefully adapted to the new scoring rules and aimed at simulating natural dialogue flow

Key Methods:

  • Defined clear objective at the start of the prompt
  • Designed special prompts for initial utterances to simulate realistic conversation openers
  • Merged all prior utterances into a single user prompt instead of alternating user/system pairs
  • Post-processed model responses for completeness and fluency
  • Deliberately kept prompts short to avoid overfitting

Insight: While brevity may have limited peak performance, the approach prioritised adaptability and relevance, a strategic trade-off in favour of generalisation

:computer: Implementation Tips
Want to apply Kaidi’s approach to your solution? Here are some practical steps:

Simplify your prompt structure:

  • Start with a clear, concise objective statement
  • Remove unnecessary complexity and instructions
  • Focus on the essential elements needed for persona alignment

Improve conversation handling:

  • Create specialised handling for conversation starters
  • Experiment with merging the dialogue history into a single, unified context
  • Implement lightweight post-processing for response quality (both are sketched below)
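
Both ideas fit in a few lines. The sketch below flattens (speaker, text) pairs into one user message, using an OpenAI-style message schema as an assumption (Kaidi’s exact format isn’t published), and trims trailing sentence fragments from the reply:

```python
import re

def merge_history(history: list[tuple[str, str]], new_utterance: str) -> list[dict]:
    """Collapse all prior turns into a single user message."""
    transcript = "\n".join(f"{speaker}: {text}" for speaker, text in history)
    merged = f"Conversation so far:\n{transcript}\n\nUser: {new_utterance}"
    return [{"role": "user", "content": merged}]

def postprocess(reply: str) -> str:
    """Drop a trailing fragment so truncated generations still read cleanly."""
    complete = "".join(re.findall(r"[^.!?]*[.!?]", reply)).strip()
    return complete or reply.strip()  # fall back if no sentence terminator found
```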

Balance brevity with performance:

  • Test incrementally shorter prompts while monitoring performance
  • Identify which prompt elements contribute most to score improvement (see the ablation sketch below)
  • Find the optimal balance between prompt length and effectiveness
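
One simple way to find out is leave-one-out ablation: score the full prompt, then re-score with each section removed and compare. The scorer below is a toy placeholder for your real evaluation:

```python
def ablate(sections: dict[str, str], score_fn) -> dict[str, float]:
    """Return each section's contribution as (full score - score without it)."""
    full = score_fn("\n\n".join(sections.values()))
    return {name: full - score_fn("\n\n".join(v for k, v in sections.items() if k != name))
            for name in sections}

# Toy scorer that just rewards longer prompts; replace with your metric.
print(ablate({"goal": "Reply in character.",
              "persona": "You are Mika, a warm, concise birdwatcher.",
              "history": "User: Hi!"},
             score_fn=len))
```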