Once you know your performance metrics and how to monitor them, you’ll be in a good position to optimize your AI agent.
Optimizing Prompts
If you aren’t getting the desired output from an LLM, one solution might be to improve your prompt. Study different prompt engineering techniques to guide the LLM toward better answers. Some techniques you should be familiar with are:
Chain-of-Thought: Tell the LLM to break the task down into steps and tackle it step by step.
Few-shot: Provide the LLM with several examples of the output you want for a given input.
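To make the few-shot idea concrete, here's a minimal sketch that assembles a few-shot prompt as an OpenAI-style message list. The helper name, the sentiment-classification task, and the example pairs are all hypothetical, invented for illustration.

```python
# Sketch: build a few-shot prompt as a message list. The function name,
# task, and example pairs are hypothetical.
def build_few_shot_prompt(
    task: str, examples: list[tuple[str, str]], query: str
) -> list[dict]:
    """Show the model input/output pairs before asking the real question."""
    messages = [{"role": "system", "content": task}]
    for user_input, desired_output in examples:
        messages.append({"role": "user", "content": user_input})
        messages.append({"role": "assistant", "content": desired_output})
    messages.append({"role": "user", "content": query})
    return messages

prompt = build_few_shot_prompt(
    task="Classify the sentiment of the review as positive or negative.",
    examples=[
        ("The battery lasts all day!", "positive"),
        ("It broke after a week.", "negative"),
    ],
    query="Setup was quick and painless.",
)
```

Because the model sees worked examples in the same format it should produce, its answer to the final query tends to follow that format.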
Prompt writing isn't an exact science. You should iteratively refine your prompts to discover what produces the best results. And if you change your LLM, you'll need to re-evaluate your prompts.
Optimizing Efficiency
You can do many things to improve the efficiency of your AI agents in terms of time and resources.
Time
One way to speed up an agent is to perform tasks in parallel rather than sequentially. For example, if two nodes both need to make an LLM call and neither depends on the result of the other, then this is a good candidate for running them in parallel.
LangGraph supports both sequential and parallel execution. It just depends on how you build your graph. In the image below, the graph is set up to execute sequentially.
A sequential graph
However, the graph in this next image runs two of its nodes in parallel.
A parallel graph
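To see why parallel execution helps, here's a minimal stdlib-only sketch using asyncio. The `fake_llm_call` function is a stand-in for a real provider API call, with a sleep simulating network latency; a graph framework would wire this up for you, but the principle is the same.

```python
import asyncio
import time

# fake_llm_call stands in for a real LLM API call; the sleep simulates
# network latency. Both names and prompts are hypothetical.
async def fake_llm_call(prompt: str) -> str:
    await asyncio.sleep(0.1)  # simulated latency
    return f"response to: {prompt}"

async def run_parallel() -> list[str]:
    # Neither call depends on the other's result, so run them together.
    return list(await asyncio.gather(
        fake_llm_call("summarize the page"),
        fake_llm_call("extract the links"),
    ))

start = time.perf_counter()
results = asyncio.run(run_parallel())
elapsed = time.perf_counter() - start  # ~0.1s total, not ~0.2s
```

Run sequentially, the two calls would take the sum of their latencies; gathered, they take roughly the slower of the two.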
Branching isn't limited to sequential order. You can also run multiple nodes after they "fan out" from a single node. Then, they can "fan in" to a single node where the values are combined according to a reducer function. You can read more about this in the LangGraph branching documentation.
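The fan-in step can be sketched in plain Python. This is not LangGraph's implementation, just an illustration of the idea: a state field is annotated with a reducer (here `operator.add`, which concatenates lists), and each parallel branch's partial update is merged with that reducer. The state shape and branch payloads are made up.

```python
import operator
from typing import Annotated, TypedDict

# Hypothetical shared state: the Annotated reducer records that parallel
# branch results should be combined with operator.add (list concatenation).
class AgentState(TypedDict):
    findings: Annotated[list, operator.add]

def fan_in(state: dict, updates: list[dict]) -> dict:
    """Merge each branch's partial update into the shared state."""
    for update in updates:
        state = {"findings": operator.add(state["findings"], update["findings"])}
    return state

# Two branches fan out from one node and each returns a partial update...
branch_a = {"findings": ["price from branch A"]}
branch_b = {"findings": ["rating from branch B"]}
# ...then fan in: the reducer concatenates both result lists.
merged = fan_in({"findings": []}, [branch_a, branch_b])
```

Without a reducer, the second branch's update would simply overwrite the first; the reducer is what lets concurrent branches contribute to the same field.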
Another trick to get an agent to perform better is to stream the tokens from the LLM rather than waiting for a request to complete before presenting anything to the user. LangGraph supports this with astream_events. Read more about that in the streaming events documentation.
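The consumption side of streaming looks roughly like this sketch. Here `fake_token_stream` is a stand-in for a provider's streaming API: instead of one big response, you get tokens as an async iterator and can show each one to the user the moment it arrives.

```python
import asyncio

# fake_token_stream stands in for an LLM provider's streaming API.
# The tokens are hypothetical.
async def fake_token_stream(prompt: str):
    for token in ["Hello", ", ", "world", "!"]:
        await asyncio.sleep(0.01)  # simulated per-token latency
        yield token

async def stream_reply(prompt: str) -> str:
    chunks = []
    async for token in fake_token_stream(prompt):
        chunks.append(token)  # in a real UI, display each token immediately
    return "".join(chunks)

reply = asyncio.run(stream_reply("greet the user"))
```

The total time to the last token is unchanged, but the time to the *first* token drops dramatically, which is what makes the app feel responsive.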
Resources
When you think about optimizing resources, consider how you can reduce token usage. Do you need to send the entire message conversation on every request? Probably not. You noticed in the Lesson 3 and 4 demos that when the screenshot image was converted to Base64, it was a massive text string. The tutorial didn’t have you send the entire message list to the LLM because you wouldn’t upload all those tokens on every request. You only needed the screenshot image when you were generating the contextual comments. After you had those, you no longer needed the image.
Even if you do need to retain a record of the chat history, there are things you can do to save on token usage. For example, when the message list grows over a certain length, you can ask the LLM to summarize the chat history. Then, on future requests, you can drop the old messages and just include the summary. You'll find this example in the persistence documentation.
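A minimal sketch of that compaction step, with `summarize_with_llm` as a stand-in for a real summarization call and a made-up threshold:

```python
# Hypothetical threshold: compact once history grows past this.
MAX_MESSAGES = 6

def summarize_with_llm(messages: list[dict]) -> str:
    # Stand-in: in practice you'd ask the LLM to summarize these messages.
    return f"Summary of {len(messages)} earlier messages."

def compact_history(messages: list[dict]) -> list[dict]:
    """Replace old messages with one summary, keeping the recent turns."""
    if len(messages) <= MAX_MESSAGES:
        return messages
    old, recent = messages[:-2], messages[-2:]
    summary = {"role": "system", "content": summarize_with_llm(old)}
    return [summary, *recent]

history = [{"role": "user", "content": f"message {i}"} for i in range(10)]
compacted = compact_history(history)  # 10 messages shrink to 3
```

Each future request then carries the short summary plus a couple of recent turns instead of the whole transcript.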
Uw ucsakoux ra kamqeeqotf fqu zalmoj as lojofq wui ibu, xie kil izxa adyalexefj seyr bucyicung sahomc. Jyo sayx lacezvon yejahl ame pseuxim wit ebo rnazl wuuce xuuz ok ndotizacx dofihoc-neexxirf tonv asm ikmnefewp warit voamhuopd. Muraanu ix dqow, vea kor pa ocje ba muolyeoz nci wuuragq al laot ibakl hkeva defjianuvh ebb kehl ym ewuyr a doqu xarufxib lilin god tapxguv fuasohazs heqsf vmaxe amadr e xneudod jezeq biw gefwde qobkw.
Note: While optimization and efficiency tips are important, don't worry if your agent consumes a lot of tokens. As mentioned previously, the cost of LLMs is on a downward trend. Things that are expensive today may be affordable tomorrow. And even if you continuously streamed tokens from an LLM provider, you'd probably still pay far less per year than you would pay a human.
Optimizing UX
Step back occasionally and ask yourself what would make the entire experience better for the end user. Perhaps you need to re-architect how the application works. Perhaps you need to use a more powerful model or a better text-to-speech engine. Maybe you need to work on decreasing latency. Don’t be afraid to make big changes or even start over from scratch if your current implementation isn’t working.
You also need to accept the limitations of the technology and the current models. LLMs still haven't reached the level of humans, so part of optimizing your agent's workflow might be to add some human-in-the-loop interactions.
This content was released on Nov 12, 2024. The official support period is six months from this date.