πŸ§‘β€πŸ’» Office Hour for the Comprehensive RAG (CRAG) Challenge

Hello all,

We invite you to join the Office Hour for the Comprehensive RAG (CRAG) Challenge. This Office Hour is a chance to interact with the organisers, gain deep insights into the dataset and problem statement, and get your questions answered.

:alarm_clock: 23rd April, 2024, 18:00 PST
:point_right: Join the Office Hour on Zoom

For those unable to attend, a recording will be available. Feel free to post your questions here, and the organisers will answer them during the event.

:video_camera: Office Hour Highlights:

  • Direct engagement with organisers
  • Collaborative discussions with other attendees
  • In-depth understanding of CRAG benchmarks
  • What’s next in the challenge
  • Live Q&A

:woman_teacher: Meet the speakers

  • Xiao Yang: Applied Research Scientist at Meta Reality Labs, PhD in Statistics from Yale, focusing on retrieval-augmented generation.
  • Kai Sun: Research Scientist at Meta, PhD from Cornell, organizer of Gomocup and chair for major NLP conferences.
  • Xin Luna Dong: Principal Scientist at Meta, expert in building intelligent personal assistants and knowledge graphs, ACM and IEEE Fellow.

:speech_balloon: If you can’t attend, leave your questions in the comments, and the organisers will answer them during the session.

:spiral_calendar: Mark your calendars, prepare your questions, and join the live Office Hour.

Looking forward to seeing you there!
Team AIcrowd

2 Likes
  1. Would it be acceptable to use a fine-tuned version of the Llama 2 model from Hugging Face, even though it’s open source? Or should we refrain from starting with any models other than those obtained from http://ai.meta.com/ (Download Llama) or TheBloke/Llama-2-70B-GGML · Hugging Face?
  2. How will you verify whether a submitted model is derived from one of the original Llama 2 models mentioned above?
  3. When submitting, is it okay to upload additional data for retrieval purposes?
    For instance, in Task 1, can one upload a supplementary retrieval corpus and retrieve from it, in addition to the provided search results, when generating the answer? (A minimal sketch of this setup follows after this list.)
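For concreteness, here is a minimal sketch of the setup question 3 describes, assuming a hypothetical local corpus and a plain keyword-overlap scorer; none of this is the official CRAG pipeline, and the function names are illustrative only:

```python
# Hypothetical illustration of "retrieve from a supplementary corpus as well as
# the provided search results": rank both sources together and keep the top-k.
# The corpus contents, scoring function, and function names are assumptions.

def keyword_score(query: str, passage: str) -> int:
    """Crude relevance score: how many query words appear in the passage."""
    query_words = set(query.lower().split())
    return sum(1 for word in query_words if word in passage.lower())

def retrieve(query: str, local_corpus: list[str], search_results: list[str], k: int = 5) -> list[str]:
    """Rank local passages and provided search results together, return the top-k."""
    candidates = local_corpus + search_results
    ranked = sorted(candidates, key=lambda p: keyword_score(query, p), reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    corpus = ["Paris is the capital of France.", "The Nile is a river in Africa."]
    results = ["France's capital city is Paris, on the Seine."]
    print(retrieve("What is the capital of France?", corpus, results, k=2))
```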
1 Like
  1. Is it allowed to use Llama 3 models?
  2. Is the submission limit per task, or is it the total across all tasks combined?
  3. How many teams will be selected in Phase 2?
2 Likes

Where will the recording be stored?

1 Like
  1. Will the 10-second time limit be relaxed? Ten seconds is just too short, especially for Task 3.
  2. During evaluation after submission, the inference time of a model (such as Llama 7B Chat) on 2 × T4 GPUs is significantly greater than what I see locally on a single RTX 3090, which makes it difficult to estimate my solution’s inference time after submission. Moreover, if any data point times out, the entire evaluation process terminates, which is not ideal. Do you have any suggestions or improvements for this? (A minimal local timing sketch follows after this list.)
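For reference, here is a minimal sketch of one way to estimate per-query latency locally against the 10-second budget before submitting; the generate_fn callable and the fallback answer are stand-ins for illustration, not the official evaluation harness:

```python
# Minimal local timing sketch, assuming a 10-second per-query budget.
# generate_fn and FALLBACK_ANSWER are hypothetical stand-ins for illustration.
import time

TIME_BUDGET_S = 10.0
FALLBACK_ANSWER = "I don't know"

def answer_with_budget(generate_fn, prompt, budget_s=TIME_BUDGET_S):
    """Call generate_fn(prompt) and check the elapsed time against the budget.

    This only measures the call after the fact; it does not forcibly interrupt
    a generation that is already running.
    """
    start = time.monotonic()
    answer = generate_fn(prompt)
    elapsed = time.monotonic() - start
    if elapsed > budget_s:
        # Log the overrun locally so slow prompts can be found and trimmed
        # (e.g. by capping max_new_tokens) before submission.
        print(f"over budget by {elapsed - budget_s:.1f}s: {prompt[:60]!r}")
        return FALLBACK_ANSWER, elapsed
    return answer, elapsed

if __name__ == "__main__":
    def fake_generate(prompt):  # stand-in for the real LLM call
        time.sleep(0.5)
        return "example answer"

    answer, seconds = answer_with_budget(fake_generate, "Who wrote Hamlet?")
    print(answer, f"({seconds:.2f}s)")
```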
1 Like

Will there be recordings?

1 Like

@jeongeum_seok @ry_j The recording and slide deck will be shared within the next 24 hours. The link will be posted on Discourse and shared through email as well.