To export your trained model from Create ML, navigate to the Output section, where you’ll find your model ready for export. Simply click the Get button, which will prompt you to choose a location to save the model file. Ensure you select a location that’s easy to access for the following steps.
The exported model will have a .mlmodel extension, which is the Core ML format. This format is directly compatible with iOS applications, so there’s no need to convert the model to another format.
Integrating the Model Into a SwiftUI App
To integrate your custom image classification model into a SwiftUI app, you’ll follow a process that involves creating an instance of your model, setting up the image-classification request, and updating the UI with the results. Here’s how you can achieve this:
1. Creating an Image Classifier Instance
Start by creating an instance of your Core ML model. Do this once, when the app launches, so a single instance of the model is available for efficient performance throughout the app. The methods in the following steps all belong to this same class.
import CoreML
import Vision

final class EmotionClassifier {
    // Share a single instance across the app
    static let shared = EmotionClassifier()

    // 1. Initialize the model
    private let model: VNCoreMLModel

    init() {
        // 2. Load the Core ML model
        guard let model = try? VNCoreMLModel(for: EmotionsImageClassifier().model) else {
            fatalError("Failed to load Core ML model.")
        }
        self.model = model
    }
}
Here’s a breakdown of the code above:
Initialize the model: This code declares a property to hold the Core ML model instance.
Load the Core ML model: This code attempts to create a VNCoreMLModel instance from your Core ML model. If it fails, it triggers a fatal error, ensuring you’re notified if something went wrong.
2. Creating an Image-Classification Request
To classify an image, you must create a VNCoreMLRequest using your model. This request will process the image and provide classification results.
func classifyImage(_ image: UIImage) {
// 1. Create a VNCoreMLRequest with the model
let request = VNCoreMLRequest(model: model) { (request, error) in
// 2. Handle the classification results
guard let results = request.results as? [VNClassificationObservation],
let firstResult = results.first else {
return
}
print("Classification: \(firstResult.identifier), Confidence: \(firstResult.confidence)")
}
// 3. Configure the request to crop and scale images
request.imageCropAndScaleOption = .centerCrop
}
Here’s a breakdown of the code above:
Create a VNCoreMLRequest with the model: This code creates a new image-classification request using the model you initialized. It includes a completion handler to process the results.
Handle the classification results: Inside the completion handler, this code checks if the results can be cast to an array of VNClassificationObservation and then processes the first result.
3. Performing the Classification Request
You use a VNImageRequestHandler to perform the request on an image. It processes the image and delivers the results back through the request’s completion handler.
func performClassification(for image: UIImage) {
guard let cgImage = image.cgImage else {
return
}
// 1. Create a VNImageRequestHandler with the image
let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
// 2. Perform the classification request
let request = VNCoreMLRequest(model: model) { (request, error) in
// Handle the results in the completion handler
}
do {
try handler.perform([request])
} catch {
print("Failed to perform classification request: \(error)")
}
}
Here’s a breakdown of the code above:
Create a VNImageRequestHandler: This code initializes a request handler with the provided image. The image must be converted to a CGImage format.
4. Handling and Extracting High-Confidence Results
Once you receive the classification results from the Core ML model, the next step is to handle these results and identify the most accurate classification based on confidence scores. This process involves checking for valid results and selecting the one with the highest confidence to ensure that you present the most reliable classification to the user.
// 1. Handle the classification results
guard let results = request.results as? [VNClassificationObservation] else {
print("No results found")
completion(nil, nil)
return
}
// 2. Find the top result based on confidence
let topResult = results.max(by: { a, b in a.confidence < b.confidence })
guard let bestResult = topResult else {
print("No top result found")
completion(nil, nil)
return
}
Here’s a breakdown of the code above:
Handle the classification results: In this step, the code checks whether request.results can be cast to an array of VNClassificationObservation. This step ensures that the results are valid and contain the expected classification observations. If the cast fails, indicating that no results are found, an error message is printed and the completion handler is called with nil values.
Find the top result based on confidence: This section finds the classification observation with the highest confidence score. The results.max(by:) method iterates through the VNClassificationObservation array and compares each observation’s confidence score. The observation with the highest confidence is returned as topResult. If no result is found, an error message is printed and the completion handler is called with nil values. If a top result is successfully identified, it’s used for the final classification output.
By focusing on the classification with the highest confidence, you ensure that the most accurate and reliable result is presented to the user, enhancing the effectiveness of your app’s image classification feature.
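Putting these pieces together, here’s a sketch of how the snippet above could live in a complete handler. The (String?, Float?) completion signature is a hypothetical choice for illustration; adapt it to whatever your UI needs:
func handleClassificationResults(_ request: VNRequest,
                                 completion: (String?, Float?) -> Void) {
    // 1. Handle the classification results
    guard let results = request.results as? [VNClassificationObservation] else {
        print("No results found")
        completion(nil, nil)
        return
    }
    // 2. Find the top result based on confidence
    guard let bestResult = results.max(by: { $0.confidence < $1.confidence }) else {
        print("No top result found")
        completion(nil, nil)
        return
    }
    // 3. Pass back the identifier and confidence of the best match
    completion(bestResult.identifier, bestResult.confidence)
}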
5. Updating the UI with Classification Results
After receiving the classification results, it’s essential to update the UI to present these results to the user in a clear and meaningful way. This step involves converting the raw prediction data into a user-friendly format and ensuring the UI elements reflect the updated information. Typically, this means updating labels, text fields, or other UI components with the classification results. It’s crucial to perform these updates on the main thread to ensure smooth and responsive user interactions.
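Here’s a minimal sketch of one way to do this in SwiftUI. The ClassificationViewModel type, its resultText property, and the showResult method are hypothetical names used for illustration:
import SwiftUI

// A hypothetical view model that publishes the latest classification
final class ClassificationViewModel: ObservableObject {
    @Published var resultText = "No classification yet"

    func showResult(identifier: String, confidence: Float) {
        // Hop to the main thread before mutating UI-bound state
        DispatchQueue.main.async {
            self.resultText = "\(identifier) (\(Int(confidence * 100))%)"
        }
    }
}

// A SwiftUI view that stays in sync with the published result
struct ClassificationView: View {
    @ObservedObject var viewModel: ClassificationViewModel

    var body: some View {
        Text(viewModel.resultText)
    }
}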
Tips to Optimize the Model for Real-Time Performance
Optimize Predictions on Background Threads
Run your model’s predictions off the main thread to keep the UI responsive.
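As a minimal sketch, assuming the performClassification(for:) method from step 3, you could wrap the call like this:
// Run Vision work on a background queue so the main thread stays free
func classifyInBackground(_ image: UIImage) {
    DispatchQueue.global(qos: .userInitiated).async {
        self.performClassification(for: image)
    }
}
Because VNImageRequestHandler’s perform(_:) runs synchronously, dispatching it to a background queue keeps that work off the main thread; just remember to hop back to the main thread before touching the UI.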
Batch Multiple Requests
For tasks requiring multiple classifications in a short period, consider batching your requests. This method minimizes the overhead of individual requests.
func classifyBatchImages(_ images: [UIImage]) {
    // Reuse a single request for the whole batch instead of rebuilding it per image
    let request = VNCoreMLRequest(model: model) { request, error in
        // Handle each image's results here
    }
    for image in images {
        guard let cgImage = image.cgImage else { continue }
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try? handler.perform([request])
    }
}
Reduce Image Size
Before passing images to the model, resize them to match the input size your model expects (e.g., 224x224 pixels). This reduces the computational load.
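As a minimal sketch using UIGraphicsImageRenderer (the 224x224 default here is an assumption; match your model’s actual input size):
import UIKit

// Redraw the image at the model's expected input dimensions
func resizedImage(_ image: UIImage,
                  to size: CGSize = CGSize(width: 224, height: 224)) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}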
Profile Your Model’s Performance
Use Xcode’s profiling tools, such as Instruments, to monitor your model’s performance and identify any bottlenecks or areas for improvement.
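To make predictions easy to spot in Instruments, you can mark them with signposts. This sketch assumes the performClassification(for:) method from step 3 and requires iOS 15 or later; the subsystem string is a placeholder:
import os

// Intervals recorded here appear in Instruments' os_signpost track
let signposter = OSSignposter(subsystem: "com.example.emotions",
                              category: "Classification")

func timedClassification(for image: UIImage) {
    let state = signposter.beginInterval("classify")
    performClassification(for: image)
    signposter.endInterval("classify", state)
}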