Implementing function calling to write a custom cover letter with the GPT-3.5-Turbo-0613 API



Introduction

Recently, OpenAI announced API updates, and the biggest addition I found is function calling, which expands the capability of the GPT API by letting it use third-party APIs, for example, getting the current weather for a given location from the OpenWeather API. It helps the model gather information from the internet (possibly in real time) and answer the question. In this post, I implemented a function to gather job post information and write a cover letter based on my CV. I am not good at writing a prompt that produces a perfect cover letter, so feel free to help me improve it if you found this article useful. Thank you in advance 😊

The use case is simple: the user gives the application a command containing a LinkedIn job URL, and the system returns a cover letter that combines information from the user's CV and the job post. This article focuses on the flow of receiving the user message, calling the function that grabs the LinkedIn job post, returning the result to the API, and generating the answer.

In this small project, I was inspired by the Vercel AI SDK and openai-edge. They also provide a template for building an AI chatbot, which you can check at any time on their documentation page.

Requirements

  • An OpenAI API key.

  • One of these models: gpt-3.5-turbo-0613, gpt-3.5-turbo-16k-0613, gpt-4-0613, or gpt-4-32k-0613.

Implementation

To help the model understand when to use the functions, we need to declare them with enough detail for it to decide. I created two functions: one for getting job post information and another for writing a cover letter. With a simple prompt such as "summarize the details of this job post: linkedin.com/jobs/view/1234567890", I expect the result to be a summary of the job post details; you could write a more detailed prompt to list the details in bullet points. Both functions require a parameter named url. For each declaration, we also need to implement the corresponding function that returns the information to the API.

export const functions = [
  {
    name: "get_job_post_details",
    description:
      "Get Linkedin job post details from the given URL. Output is the string of the job post details.",
    parameters: {
      type: "object",
      properties: {
        url: {
          type: "string",
          description:
            "The URL of the job post from Linkedin to be fetched. For example: https://www.linkedin.com/jobs/view/1234567890",
        },
      },
      required: ["url"],
    },
  },
  {
    name: "write_cover_letter_for_me", 
    description:
      "Write a cover letter for the job post at the given URL, using my CV. Output is an object containing the job post details and the user's CV used to write the cover letter.",
    parameters: {
      type: "object",
      properties: {
        url: {
          type: "string",
          description:
            "The URL of the job post from Linkedin to be fetched. For example: https://www.linkedin.com/jobs/view/1234567890",
        },
      },
      required: ["url"],
    },
  },
];

After sending the prompt, the API returns a message indicating that the appropriate function should be called, along with the required arguments.

{
  id: 'chatcmpl-7b5aYqKc4mBmL2j74fYTLMKEZOSdf',
  object: 'chat.completion',
  created: 1689073974,
  model: 'gpt-3.5-turbo-16k-0613',
  choices: [
    {
      index: 0,
      message: {
        role: 'assistant',
        content: null,
        function_call: {
          name: 'get_job_post_details',
          arguments: '{\n  "url": "https://www.linkedin.com/jobs/view/1234567890"\n}'
        }
      },
      finish_reason: 'function_call'
    }
  ],
  usage: { prompt_tokens: 196, completion_tokens: 29, total_tokens: 225 }
}

Then I run the function based on what the API returned. In this example, get_job_post_details is called. Here is the implementation: it fetches the HTML page from the url, sanitizes it, and uses a regex to extract only the description section. This information is then sent back to the API for the model.

import sanitizer from "sanitize-html"; // assumed: the `sanitizer` call below comes from the sanitize-html package

type ArgumentsType = {
  url: string;
};

async function getJobPostDetails(args: ArgumentsType) {
  try {
    const url = args.url;
    const html = await fetch(url).then((res) => res.text());

    // Keep only the tags needed for the description and the container's class
    let clean = sanitizer(html, {
      allowedTags: ["div", "p", "strong", "ul", "li"],
      allowedAttributes: false,
      allowedClasses: {
        div: ["description__text"],
      },
    });

    // Extract the inner HTML of the job description container
    const regex = /<div[^>]*class="description__text"[^>]*>(.*?)<\/div>/s;
    const match = clean.match(regex);

    if (match && match[1]) {
      const innerContent = match[1];
      return innerContent.trim();
    }

    return "";
  } catch (error) {
    console.error(error);
    return "";
  }
}

To write a cover letter, besides the prompt and the job details, we also need to attach our CV information in the payload sent back to the model API. I also created a function named runFunction, which calls the appropriate function based on the name returned by the API, and writeCoverLetter, which gives back both the job post and CV details.

async function writeCoverLetter(args: ArgumentsType) {
  try {
    // Reuse the job post scraper, then attach the CV content
    const jobDetails = await getJobPostDetails(args);

    return {
      job_details: jobDetails,
      // myCV is assumed to be a string constant holding the user's CV content
      cv_details: myCV,
    };
  } catch (error) {
    console.error(error);
    return {
      job_details: "",
      cv_details: "",
    };
  }
}
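
The chat route below imports runFunction from this file, but its body is not shown above. Here is a minimal sketch of how it could dispatch to the two functions, assuming args arrives as the raw JSON string taken from function_call.arguments in the API response:

// Minimal sketch of runFunction (assumption: `args` is the raw JSON string
// from `function_call.arguments`).
export async function runFunction(name: string, args: string) {
  const parsedArgs: ArgumentsType = JSON.parse(args);

  switch (name) {
    case "get_job_post_details":
      return await getJobPostDetails(parsedArgs);
    case "write_cover_letter_for_me":
      return await writeCoverLetter(parsedArgs);
    default:
      throw new Error(`Unknown function: ${name}`);
  }
}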

Here is the full code of the chat API route that handles user messages to the AI model.

import { functions, runFunction } from "@/lib/functions";
import { OpenAIStream, StreamingTextResponse } from "ai";
import { Configuration, OpenAIApi } from "openai-edge";

const config = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});

const openai = new OpenAIApi(config);

export const runtime = "edge";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const initialRes = await openai.createChatCompletion({
    model: "gpt-3.5-turbo-16k-0613",
    messages,
    functions,
    function_call: "auto",
  });

  const initialResJson = await initialRes.json();
  const initialResMessage = initialResJson.choices[0].message;
  let finalRes;

  if (initialResMessage.function_call) {
    const { name, arguments: args } = initialResMessage.function_call; // cannot name the variable 'arguments', so rename it to 'args'
    const functionRes = await runFunction(name, args);

    const finalMessages = [
      ...messages,
      initialResMessage,
      {
        role: "function",
        name: initialResMessage.function_call.name,
        content: JSON.stringify(functionRes),
      },
    ];

    finalRes = await openai.createChatCompletion({
      model: "gpt-3.5-turbo-16k-0613",
      stream: true,
      messages: finalMessages,
    });

    const stream = OpenAIStream(finalRes);
    return new StreamingTextResponse(stream);
  } else {
    // If there's no function call, just return the initial response,
    // but first convert the message content into a ReadableStream.
    const chunks = initialResMessage.content.split(" ");
    const stream = new ReadableStream({
      async start(controller) {
        for (const chunk of chunks) {
          const bytes = new TextEncoder().encode(chunk + " ");
          controller.enqueue(bytes);
          await new Promise((r) =>
            setTimeout(
              r,
              // get a random number between 10ms and 30ms to simulate a random delay
              Math.floor(Math.random() * 20 + 10)
            )
          );
        }
        controller.close();
      },
    });

    return new StreamingTextResponse(stream);
  }
}
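
On the client side, the Vercel AI SDK's useChat hook can consume this streaming route. The sketch below is my assumption of a minimal chat component; the default /api/chat path and the markup are not part of the original code:

"use client";

import { useChat } from "ai/react";

export default function Chat() {
  // useChat posts the conversation to /api/chat by default and streams the reply
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map((m) => (
        <p key={m.id}>
          {m.role}: {m.content}
        </p>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Summarize this job post: https://www.linkedin.com/jobs/view/1234567890"
        />
      </form>
    </div>
  );
}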

Conclusion

I hope you found some useful information in this article. Happy coding 👨‍💻
