In Lesson 1, you laid the groundwork for the MoodTracker app by implementing the basic UI for emotion detection. In this demo, you’ll go through the process of integrating the Core ML model into your MoodTracker app to perform emotion detection on images. This includes setting up a view model, configuring the classifier, and updating the user interface to display the results.
To follow along, you’ll need the MoodTracker app as you left it at the end of Lesson 1, along with the Create ML project containing the three classifiers you trained earlier. First, you’ll export the .mlmodel file from that project. The Create ML project keeps every model you trained across its model sources, so you’ll need to pick the right one in Create ML before exporting your best model.
Open the EmotionsImageClassifier project and, in the Model Sources section, choose the classifier you configured that achieved the best accuracy among the model sources. Then, open the Output tab. Next, click the Get button to export the model. When saving the file, name it EmotionsImageClassifier so it matches the instructions. Now, you have the model ready to use in your project with Core ML.
Now, open the MoodTracker app. In this demo, you’ll add more functionality to the EmotionDetectionView. That’s why you’ll create a view model for this view to handle its logic and state. Create a new folder named ViewModel, then add a new Swift file named EmotionDetectionViewModel. This view model will start with only a property for the image and a reset method to clear that image.
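As a reference point, here’s a minimal sketch of that starting view model. The ObservableObject conformance and exact naming are assumptions based on how the properties are used later in this demo, so adapt them to your own project.

import SwiftUI
import UIKit

// A minimal sketch of the starting view model. Names and conformances
// are assumptions and may differ slightly from your own code.
class EmotionDetectionViewModel: ObservableObject {
  // The image the user picked or captured in EmotionDetectionView.
  @Published var image: UIImage?

  // Clears the current selection so the user can start over.
  func reset() {
    self.image = nil
  }
}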
Drag and drop the EmotionsImageClassifier.mlmodel file into the Models folder. Next, create a Swift file in the same folder and name it EmotionClassifier. In this file, you’ll build the classifier as you learned in the Instruction section of this lesson.
You’ll load the Core ML model in the initializer. Then, the classify method will convert the UIImage to a CIImage. Next, you’ll create a VNCoreMLRequest with the model. After that, you’ll handle the classification results and pick the top one. Finally, you’ll create a handler and perform the request on a background thread as a best practice, as you learned in the previous section. If you need more details about any of these steps in the classifier, you can refer back to the Instruction section to review them.
import UIKit
import Vision
import CoreML
class EmotionClassifier {
  private let model: VNCoreMLModel

  init() {
    // 1. Load the Core ML model
    let configuration = MLModelConfiguration()
    guard let mlModel = try? EmotionsImageClassifier(configuration: configuration).model else {
      fatalError("Failed to load model")
    }
    self.model = try! VNCoreMLModel(for: mlModel)
  }

  func classify(image: UIImage, completion: @escaping (String?, Float?) -> Void) {
    // 2. Convert UIImage to CIImage
    guard let ciImage = CIImage(image: image) else {
      completion(nil, nil)
      return
    }

    // 3. Create a VNCoreMLRequest with the model
    let request = VNCoreMLRequest(model: model) { request, error in
      if let error = error {
        print("Error during classification: \(error.localizedDescription)")
        completion(nil, nil)
        return
      }

      // 4. Handle the classification results
      guard let results = request.results as? [VNClassificationObservation] else {
        print("No results found")
        completion(nil, nil)
        return
      }

      // 5. Find the top result based on confidence
      let topResult = results.max(by: { a, b in a.confidence < b.confidence })
      guard let bestResult = topResult else {
        print("No top result found")
        completion(nil, nil)
        return
      }

      // 6. Pass the top result to the completion handler
      completion(bestResult.identifier, bestResult.confidence)
    }

    // 7. Create a VNImageRequestHandler
    let handler = VNImageRequestHandler(ciImage: ciImage)

    // 8. Perform the request on a background thread
    DispatchQueue.global(qos: .userInteractive).async {
      do {
        try handler.perform([request])
      } catch {
        print("Failed to perform classification: \(error.localizedDescription)")
        completion(nil, nil)
      }
    }
  }
}
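Next, switch back to EmotionDetectionViewModel and add two published properties for the detected emotion and its accuracy, along with an instance of the new classifier: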
@Published var emotion: String?
@Published var accuracy: String?
private let classifier = EmotionClassifier()
Now, create a classifyImage method that resizes the image before classification, as you learned is a best practice. Then, use the new EmotionClassifier class to classify the image, and update both the emotion and accuracy properties accordingly once the classification completes.
func classifyImage() {
  if let image = self.image {
    // Resize the image before classification
    let resizedImage = resizeImage(image)
    DispatchQueue.global(qos: .userInteractive).async {
      self.classifier.classify(image: resizedImage ?? image) { [weak self] emotion, confidence in
        // Update the published properties on the main thread
        DispatchQueue.main.async {
          self?.emotion = emotion ?? "Unknown"
          self?.accuracy = String(format: "%.2f%%", (confidence ?? 0) * 100.0)
        }
      }
    }
  }
}

// Draws the image into a 224x224 context and returns the resized copy.
private func resizeImage(_ image: UIImage) -> UIImage? {
  UIGraphicsBeginImageContext(CGSize(width: 224, height: 224))
  image.draw(in: CGRect(x: 0, y: 0, width: 224, height: 224))
  let resizedImage = UIGraphicsGetImageFromCurrentImageContext()
  UIGraphicsEndImageContext()
  return resizedImage
}
Next, update the reset method to also clear the emotion and accuracy properties, ensuring the EmotionResultView is hidden after resetting. Now, it’s time to apply this in your view so users can classify the chosen image and see the classification results.
self.emotion = nil
self.accuracy = nil
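For reference, here’s a minimal sketch of the updated reset method, assuming it already cleared the image property from earlier in this demo:

// Clears the selected image and the previous classification result.
func reset() {
  self.image = nil
  self.emotion = nil
  self.accuracy = nil
}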
Create a view in the Views folder and name it EmotionResultView. This view will show users the detected emotion for the chosen image along with its accuracy. Create a simple VStack with two Text views to display the detected emotion and its accuracy.
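As a starting point, here’s a minimal sketch of what EmotionResultView might look like. The parameter names and styling are assumptions, so adapt them to your own design.

import SwiftUI

// A minimal sketch of the results view. The parameter names and styling
// are assumptions; adjust them to match your project.
struct EmotionResultView: View {
  let emotion: String
  let accuracy: String

  var body: some View {
    VStack(spacing: 8) {
      Text("Emotion: \(emotion)")
        .font(.headline)
      Text("Accuracy: \(accuracy)")
        .font(.subheadline)
    }
    .padding()
  }
}

In EmotionDetectionView, you might show this view only after a classification finishes, for example by checking that the view model’s emotion and accuracy properties aren’t nil.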
Build and run the app on a real device so you can either import an image from your gallery or take a picture of a happy or sad face. Choose an image as you did previously. Notice that you now have a new button named Detect Emotion to classify this image. Tap it and notice the results view that appears, showing whether the emotion is happy or sad along with the accuracy. If you try this project on the simulator, you’ll get inaccurate results. That’s why it’s essential to test it on a real device to obtain accurate data. Try different images with different emotions, and notice that the model might make some classification mistakes, which is acceptable.
Congratulations! You did a great job implementing the MoodTracker app to detect the dominant emotion in an image. Now, you have a well-built app ready to use.