CLEAN_PROMPTS=""" """ SELECT_QUESTION_PROMPT = """ Given the most unique answer, evaluate the following **questions ** and decide which one best matches the answer. The higher the match between the question and the answer, the higher the score. Please rate each question and answer pairing on a scale from **1 to 5**, with 1 being the worst match and 5 being the best match. Then, give a brief reason why the question best matches the answer. ### # ** Rating Criteria ** : - **5** : Perfect match - The question is exactly the same as the answer, covering all the key information for the answer. - **4** : High match - The question and answer are mostly consistent, and basically cover the core content of the answer. - **3** : Medium match - The question partially agrees with the answer, but does not match exactly, or the answer does not fully cover the requirements of the question. - **2** : Low match - There is a gap between the question and the answer, and more details may be needed to match. - **1** : Very low match - the question has little to do with the answer, or the answer does not match the question at all. ### Note that you should also include in your evaluation criteria whether the question is asked about the recommended functional group. If so, the score should be higher, if not, the score should be lower. ### ** Inputs: ** 1. ** unique answer **: {ANSWER} 2. **questions **: {QUESTIONS} ### ** Output format: ** - Score how well each question matches the answer in the following JSON format: ```json { "questions": [ { "id": 1, "score": xxxx, }, { "id": 2, "score": xxxx, }, { "id": 3, "score": xxxx, }, ... ] } ``` """