Now that you have a model, you can integrate it into your iOS app. Find and open the starter project for this lesson; you can find the materials for this project in the Walkthrough folder. Run the app, and you’ll see a basic app that lets you select a photo using the photo picker and then shows it in the view. To help test the app, add the sample image from the starter project to the Photos app in the simulator if you didn’t already do so in the previous lesson. Open the folder containing the sample image in Finder and drag the sample-image.jpg file onto the iOS simulator.
Now activate the conda environment you used in the last lesson, and change to the directory where you worked in that lesson. Start the Python interpreter and enter the following commands one line at a time:
from ultralytics import YOLO
import os
# Download the extra-large YOLOv8 Open Images V7 model used in the last lesson.
model = YOLO("yolov8x-oiv7")
# Export to Core ML with non-maximum suppression, quantizing the weights to Int8.
model.export(format="coreml", nms=True, int8=True)
# Rename the quantized package so the next export doesn't overwrite it.
os.rename("yolov8x-oiv7.mlpackage", "yolov8x-oiv7-int.mlpackage")
# Export the same model again at the default Float16 precision.
model.export(format="coreml", nms=True)
# Download and export the nano and medium variants for comparison later.
model = YOLO("yolov8n-oiv7")
model.export(format="coreml", nms=True)
model = YOLO("yolov8m-oiv7")
model.export(format="coreml", nms=True)
You’ll find these commands in the materials as download-models.py.
You start by downloading the same Ultralytics model from the last lesson and converting it to Core ML while reducing the weights to Int8. You then rename the resulting .mlpackage model file before exporting the model again at the full Float16 size. Finally, you download and convert two smaller variants of the YOLOv8 model: the nano (yolov8n) and medium (yolov8m) versions. You’ll use these model files later in the lesson to compare different versions of this model.
Return to the starter project for this lesson in Xcode. In Finder, find the following four model files: yolov8x-oiv7.mlpackage, yolov8x-oiv7-int.mlpackage, yolov8m-oiv7.mlpackage, and yolov8n-oiv7.mlpackage. Drag them into the Models group of the Xcode project. Make sure to set the Action to Copy files to destination and check the ImageDetection target for the copied files. Then click Finish.
Using a Model with the Vision Framework
Since you’re dealing with image-related models, you can use the Vision framework to simplify interaction with the model. The Vision framework provides features for performing computer vision tasks in your app and is a natural fit for any task where you analyze images or videos. It also abstracts away some basic work you’d otherwise need to handle manually. For example, the model expects a 640 x 640 image, meaning you’d need to resize each image before the model could process it. The Vision framework takes care of that for you.
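Vision decides how to fit your photo into the model’s 640 x 640 input through a crop-and-scale option on the request. You’ll build the real request later in this lesson; the lines below are only a sketch of how that option could be adjusted, assuming a VNCoreMLModel named detector like the one you’ll create shortly. The makeScaledRequest name is purely for illustration.
import Vision

// Sketch only: control how Vision fits an arbitrary photo into the model's
// fixed 640 x 640 input. Assumes `detector` is a VNCoreMLModel like the one
// you'll create later in this lesson.
func makeScaledRequest(for detector: VNCoreMLModel) -> VNCoreMLRequest {
  let request = VNCoreMLRequest(model: detector)
  // .scaleFill stretches the whole photo to the input size; .centerCrop and
  // .scaleFit are the other common choices.
  request.imageCropAndScaleOption = .scaleFill
  return request
}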
To use the framework, open ContentView.swift and add the following method below the import at the top of the file:
func runModel() {
  guard
    // 1
    let cgImage = cgImage,
    // 2
    let model = try? yolov8x_oiv7(configuration: .init()).model,
    // 3
    let detector = try? VNCoreMLModel(for: model) else {
    // 4
    print("Unable to load photo.")
    return
  }
}
You’ll use this method to perform object detection on the image. Here are the first steps.
You first need to set up the model and the Vision framework, and since each can fail, you’ll wrap them inside a guard statement. This first step validates that the user has picked a valid photo that can be represented as a CGImage. The PhotosPicker’s .onChange(of:initial:_:) modifier handles setting that photo property when the user selects an image from the photo library.
You attempt to load one of the models that you added earlier in this lesson. Note that the yolov8x_oiv7 class name comes from the information you viewed about the converted model file. You create an instance of the class with the default configuration specified by .init(). For the Vision framework, you need access to the model object, accessible using the model property of the class. Since this can throw an exception, you use the try? keyword, which returns nil if the call throws an error.
With the model loaded, you attempt to create a VNCoreMLModel class using the model loaded in step two. This class encapsulates the information loaded from the model file and acts as the interface between Vision and the model.
If any of these steps fail, then the guard/else block takes over, which will print a debugging message and return. In a real app, you’d need to provide more information and assistance to the user.
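As a side note, the .init() you pass in step two creates a default MLModelConfiguration, and you can customize that configuration before loading the model. The sketch below is a hypothetical variation showing how you could request specific compute units; the lesson itself sticks with the default.
import CoreML

// A hypothetical variation on step two: customize MLModelConfiguration
// before loading the generated model class.
let configuration = MLModelConfiguration()
// Ask Core ML to use the CPU and Neural Engine, skipping the GPU.
configuration.computeUnits = .cpuAndNeuralEngine
// Load the model with the customized configuration instead of .init().
let model = try? yolov8x_oiv7(configuration: configuration).model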
// 1
let visionRequest = VNCoreMLRequest(model: detector) { request, error in
  if let error = error {
    print(error.localizedDescription)
    return
  }
  // 2
  if let results = request.results as? [VNRecognizedObjectObservation] {
    // Insert result processing code here
  }
}
This code builds the Vision request to run the model against an image but doesn’t execute it. It also defines a closure that will execute when the detection completes. Here’s what each step does.
A VNCoreMLRequest creates the actual request for the Vision framework. You pass the VNCoreMLModel that you instantiated in step three as a parameter. The results of the request will be sent to the closure as a VNRequest, with any errors sent through the error parameter passed to the closure. This can fail if the model you’re using is incompatible with Vision, such as a model that doesn’t accept an image as input. If it fails, you print the error and return.
If no error exists, you then attempt to extract the results from the request as VNRecognizedObjectObservation objects. These objects contain the results of object detection such as you’re performing in this app with this model. You’ll do more work in the next section to use this information.
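For a concrete picture of where this is heading, here’s a sketch of how a request like this is typically performed with a VNImageRequestHandler and how the resulting observations expose labels, confidence values, and bounding boxes. The function names are placeholders, and the next section builds the lesson’s own version of this flow.
import CoreGraphics
import Vision

// A sketch of running the request you just built. Assumes `cgImage` is the
// selected photo and `visionRequest` is the VNCoreMLRequest from above.
func performDetection(on cgImage: CGImage, with visionRequest: VNCoreMLRequest) {
  // The handler wraps the image; Vision scales it to the model's 640 x 640 input.
  let handler = VNImageRequestHandler(cgImage: cgImage)
  do {
    try handler.perform([visionRequest])
  } catch {
    print(error.localizedDescription)
  }
}

// A sketch of reading VNRecognizedObjectObservation values inside the closure.
func describe(_ observations: [VNRecognizedObjectObservation]) {
  for observation in observations {
    // Each observation carries one or more labels sorted by confidence.
    let topLabel = observation.labels.first
    // boundingBox is a normalized rect (0...1) in Vision's coordinate space.
    print(topLabel?.identifier ?? "unknown",
          topLabel?.confidence ?? 0,
          observation.boundingBox)
  }
}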