You'll create a GPTClient to interact with ChatGPT. You'll need to have already registered for OpenAI, purchased tokens, and generated a project API key. If you haven't done this already, go to platform.openai.com and complete this first.
Open Xcode, select File -> New -> Playground -> iOS -> Blank, and then click Next.
Name your new Playground GPTClient.
Next, create the model types for the GPTClient. Add GPTModelVersion and GPTMessage within GPTModels.swift:
public enum GPTModelVersion: String, Codable {
  case gpt35Turbo = "gpt-3.5-turbo"
  case gpt4Turbo = "gpt-4-turbo"
  case gpt4o = "gpt-4o"
}

public struct GPTMessage: Codable, Hashable {
  public let role: Role
  public let content: String

  public init(role: Role, content: String) {
    self.role = role
    self.content = content
  }

  public enum Role: String, Codable {
    case assistant
    case system
    case user
  }
}
GPTModelVersion is an enum that represents the GPT model, and GPTMessage represents a message within the body of a GPT chat request. The role and content are exactly as explained in the instruction section. You declare everything as Codable to make it easier to encode and decode as JSON later.
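Because GPTMessage is Codable and Hashable, you can already round-trip it through JSON. Here's a quick sketch you could paste below the types to verify (the message text is arbitrary):

```swift
import Foundation

let message = GPTMessage(role: .user, content: "Hello!")

// Encode the message to JSON data.
let encoded = try JSONEncoder().encode(message)
print(String(data: encoded, encoding: .utf8)!)

// Decode it back; Hashable implies Equatable, so == works.
let decoded = try JSONDecoder().decode(GPTMessage.self, from: encoded)
print(decoded == message)  // true
```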
Note: Because you added this code to a Playground, you must explicitly declare the types as public to make them accessible. Likewise, you must declare a custom public initializer for structs, as the member-wise init is internal by default. Declaring the publicly facing types, functions and properties as public is good practice: if you move the GPTClient to another Swift module, you won't have to make any changes for it to work.
Next, add a convenience extension on Array:
public extension Array where Element == GPTMessage {
  static func makeContext(_ contents: String...) -> [GPTMessage] {
    return contents.map { GPTMessage(role: .system, content: $0) }
  }
}
This method creates GPTMessage objects from a variadic list of String values. You set the role for each as .system to indicate that these are meant to be used for context.
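For example, building a two-line context looks like this (a usage sketch; the strings are arbitrary):

```swift
let context: [GPTMessage] = .makeContext(
  "Act as a helpful tutor.",
  "Keep answers brief.")

// Every element picks up the .system role automatically.
print(context.count)               // 2
print(context[0].role == .system)  // true
```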
Next, create the request and response types, GPTChatRequest and GPTChatResponse:
struct GPTChatRequest: Codable {
  let model: GPTModelVersion
  let messages: [GPTMessage]

  init(model: GPTModelVersion,
       messages: [GPTMessage]) {
    self.model = model
    self.messages = messages
  }
}

public struct GPTChatResponse: Codable {
  public let choices: [Choice]
  let id: String
  let created: Date
  let model: String

  init(id: String, created: Date, model: String, choices: [Choice]) {
    self.id = id
    self.created = created
    self.model = model
    self.choices = choices
  }

  public struct Choice: Codable {
    public let message: GPTMessage
  }
}
You'll use GPTChatRequest to create a request to the GPT chat completions endpoint and GPTChatResponse to decode the response.
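To see the decoding in action, you can hand GPTChatResponse a JSON payload shaped like a chat completions response — the field values below are made up for illustration:

```swift
import Foundation

let json = """
{
  "id": "chatcmpl-abc123",
  "created": 1700000000,
  "model": "gpt-3.5-turbo",
  "choices": [
    { "message": { "role": "assistant", "content": "Hi there!" } }
  ]
}
"""

// Match the client's strategy: `created` arrives as a UNIX timestamp.
let decoder = JSONDecoder()
decoder.dateDecodingStrategy = .secondsSince1970
let response = try decoder.decode(GPTChatResponse.self, from: Data(json.utf8))
print(response.choices.first?.message.content ?? "none")  // Hi there!
```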
You also need to create two types to handle error cases:
enum GPTClientError: Error, CustomStringConvertible {
  case errorResponse(statusCode: Int, error: GPTErrorResponse?)
  case networkError(message: String? = nil, error: Error? = nil)

  var description: String {
    switch self {
    case .errorResponse(let statusCode, let error):
      return "GPTClientError.errorResponse: statusCode: \(statusCode), " +
        "error: \(String(describing: error))"
    case .networkError(let message, let error):
      return "GPTClientError.networkError: " +
        "message: \(String(describing: message)), " +
        "error: \(String(describing: error))"
    }
  }
}
struct GPTErrorResponse: Codable {
  let error: ErrorDetail

  struct ErrorDetail: Codable {
    let message: String
    let type: String
    let param: String?
    let code: String?
  }
}
GPTClientError is simply a custom Error that you'll throw if there's either an HTTP error code or a network error.
You haven't seen GPTErrorResponse yet, but it's pretty easy to understand. This is how ChatGPT will respond if there's a problem with the request. For example, if you forget to include an OpenAI API key, you won't get a networking error, but you will get an error response in this format instead.
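For reference, a failed request (say, a bad API key) comes back with a body shaped roughly like the one below, which GPTErrorResponse decodes directly. The exact message text will vary, and since the struct is internal, this snippet needs to sit alongside the type if you try it out:

```swift
import Foundation

let errorJSON = """
{
  "error": {
    "message": "Incorrect API key provided.",
    "type": "invalid_request_error",
    "param": null,
    "code": "invalid_api_key"
  }
}
"""

let parsed = try JSONDecoder().decode(GPTErrorResponse.self,
                                      from: Data(errorJSON.utf8))
print(parsed.error.type)  // invalid_request_error
```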
Next, create the GPTClient within GPTClient.swift:
public class GPTClient {
  var model: GPTModelVersion
  var context: [GPTMessage]
  let apiKey: String
  let encoder: JSONEncoder
  let decoder: JSONDecoder
  let urlSession: URLSession

  public init(apiKey: String,
              model: GPTModelVersion,
              context: [GPTMessage] = [],
              urlSession: URLSession = .shared) {
    self.apiKey = apiKey
    self.model = model
    self.context = context
    self.urlSession = urlSession

    let decoder = JSONDecoder()
    decoder.dateDecodingStrategy = .secondsSince1970
    self.decoder = decoder
    self.encoder = JSONEncoder()
  }
}
You declare model and context, which you'll use to create a GPTChatRequest later. You define both as var instead of let properties to make them mutable.
You also declare let properties for apiKey, encoder, decoder, and urlSession. These are properties that won't ever change after a GPTClient is created.
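Injecting urlSession also means you can configure networking behavior without touching the client — for example, a session with a shorter request timeout. This is just a sketch; the default .shared works fine for this tutorial:

```swift
import Foundation

// An ephemeral session with a 30-second request timeout.
let config = URLSessionConfiguration.ephemeral
config.timeoutIntervalForRequest = 30

let timeoutClient = GPTClient(apiKey: "{Paste your OpenAI API Key here}",
                              model: .gpt4o,
                              urlSession: URLSession(configuration: config))
```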
Next, you need a method to help you create a request generally:
private func requestFor(url: URL, httpMethod: String, httpBody: Data?)
  -> URLRequest {
  var request = URLRequest(url: url)
  request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
  request.setValue("application/json", forHTTPHeaderField: "Content-Type")
  request.cachePolicy = .reloadIgnoringLocalCacheData
  request.httpMethod = httpMethod
  request.httpBody = httpBody
  return request
}
This creates a URLRequest suitable for any ChatGPT endpoint in general.
You can next use this method to actually create a request to the chat completions endpoint:
public func sendChats(_ chats: [GPTMessage]) async throws -> GPTChatResponse {
  do {
    let chatRequest = GPTChatRequest(model: model, messages: context + chats)
    return try await sendChatRequest(chatRequest)
  } catch let error as GPTClientError {
    throw error
  } catch {
    throw GPTClientError.networkError(error: error)
  }
}

private func sendChatRequest(_ chatRequest: GPTChatRequest) async throws
  -> GPTChatResponse {
  let data = try encoder.encode(chatRequest)
  let url = URL(string: "https://api.openai.com/v1/chat/completions")!
  let request = requestFor(url: url, httpMethod: "POST", httpBody: data)
  let (responseData, urlResponse) = try await urlSession.data(for: request)

  guard let httpResponse = urlResponse as? HTTPURLResponse else {
    throw GPTClientError.networkError(
      message: "URLResponse is not an HTTPURLResponse")
  }
  guard httpResponse.statusCode == 200 else {
    let errorResponse = try? decoder.decode(GPTErrorResponse.self,
                                            from: responseData)
    throw GPTClientError.errorResponse(statusCode: httpResponse.statusCode,
                                       error: errorResponse)
  }

  let chatResponse = try decoder.decode(GPTChatResponse.self,
                                        from: responseData)
  return chatResponse
}
Here’s how that works:

- You’ll use sendChats to send messages to ChatGPT asynchronously.
- sendChats converts the passed-in chats to a GPTChatRequest using both the model and context.
- It then calls sendChatRequest, which handles encoding the GPTChatRequest, sending it using the urlSession, and then decoding either a GPTClientError in the case of failure or a GPTChatResponse if it’s successful.
Great! This takes care of the client. Now you’re ready to try it out.
Create a GPTClient on the main page for the Playground so you can run and try it out:
let client = GPTClient(apiKey: "{Paste your OpenAI API Key here}",
                       model: .gpt35Turbo,
                       context: .makeContext("Act as a scientist but be brief"))
Remember that you MUST use your own OpenAI API key here. The placeholder shown won’t actually work.
Now try to send a chat!
let prompt = GPTMessage(role: .user, content: "How do hummingbirds fly?")
Task {
do {
let response = try await client.sendChats([prompt])
print(response.choices.first?.message.content ?? "No choices received!")
} catch {
print("Got an error: \(error)")
}
}
If everything went well, you should see a response printed to the console like this:
Hummingbirds fly by flapping their wings in a figure-eight pattern, allowing them to hover, fly backward, and maneuver with precision. Their rapid wing movement produces lift and thrust, enabling them to remain airborne and access nectar from flowers.
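From here, one way to carry on a multi-turn conversation — a sketch, assuming a helper added to the Playground’s Sources folder so it can reach the client’s internal context property — is to append each exchange before the next call:

```swift
public func continueChat(client: GPTClient, userInput: String) async {
  let prompt = GPTMessage(role: .user, content: userInput)
  do {
    let response = try await client.sendChats([prompt])
    if let reply = response.choices.first?.message {
      // Remember both sides of the exchange so the next
      // request includes the full conversation so far.
      client.context.append(prompt)
      client.context.append(reply)
      print(reply.content)
    }
  } catch {
    print("Got an error: \(error)")
  }
}
```

Because sendChats prepends context to every request, each call now sees the whole conversation, so follow-up questions like “How fast do their wings beat?” resolve against the earlier answer.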