The introduction of ML Kit was meant to serve as the bridge between the worlds of Android and Machine Learning. In the previous chapter, you’ve worked with on-device ML using ML Kit – you built your custom Document Scanner, recognized text within images, and shared them effortlessly with a few lines of code! It feels powerful, right? ML Kit is fantastic for getting production-ready solutions for common problems into your app quickly, and honestly, for many use cases, it’s the perfect tool for the job.
But sometimes, you need more than what ML Kit currently offers.
Maybe you want to build a cool LLM-based chat that works offline. Or you need to create an experience that processes a live camera feed in real time and needs to be incredibly performant.
That’s the moment you graduate from ML Kit to MediaPipe.
MediaPipe: A Complete Toolset for Custom Machine Learning Solutions
ML Kit gives you a set of specialized tools, whereas MediaPipe gives you the entire workshop! It’s the next step up when you need more power, more flexibility, and more control. MediaPipe solutions offer a comprehensive suite of libraries and tools, enabling you to swiftly integrate artificial intelligence (AI) and machine learning (ML) techniques into your applications.
MediaPipe provides two main resources to empower your intelligent apps:
MediaPipe Tasks: Cross-platform APIs and libraries that make it easy to deploy and integrate ML solutions into your applications.
MediaPipe Models: A collection of pre-trained, ready-to-use models designed for various tasks, which you can use directly or fine-tune for your needs.
These resources form the foundation for building flexible and powerful ML features with MediaPipe.
The tools below enable you to use these Tasks and Models for your custom ML solutions:
MediaPipe Model Maker: This is your entry point into the world of custom models. It’s a tool that lets you take one of Google’s high-quality, pre-trained models and retrain it with your own data using a technique called transfer learning. You don’t need to be an ML expert; you just need a good dataset.
The output of Model Maker is a TensorFlow Lite .tflite file, which you’ll need to convert into a MediaPipe-specific .task file. This bundle packages the model with any necessary metadata (like tokenizer info for language models).
You’ll integrate this custom .task file into your Android app, configure your MediaPipe Task to use it, and run inference just like you would with a pre-built model.
Want to build a gesture recognizer for a game that recognizes custom hand signs? Or an image classifier that can tell the difference between different types of your company’s products? Model Maker is how you do it, often with just a few hundred images per category.
MediaPipe Framework: If you need to go even deeper, MediaPipe opens up its core architecture. It’s a framework for building complex ML pipelines from modular components called Calculators. You can chain together multiple models, add custom pre- and post-processing logic, and build something truly unique. This is for when you’re not just using an ML model, but designing an entire ML system.
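To make the Model Maker workflow concrete, here's a minimal sketch of loading a custom .task bundle into a MediaPipe Task on Android. The file name custom_gestures.task and the helper function are hypothetical, and this assumes the MediaPipe tasks-vision dependency is on your classpath:

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.core.BaseOptions
import com.google.mediapipe.tasks.vision.core.RunningMode
import com.google.mediapipe.tasks.vision.gesturerecognizer.GestureRecognizer

// Hypothetical helper: builds a GestureRecognizer backed by a custom
// Model Maker bundle shipped in the app's assets folder.
fun createCustomGestureRecognizer(context: Context): GestureRecognizer {
    val baseOptions = BaseOptions.builder()
        .setModelAssetPath("custom_gestures.task") // your Model Maker output
        .build()
    val options = GestureRecognizer.GestureRecognizerOptions.builder()
        .setBaseOptions(baseOptions)
        .setRunningMode(RunningMode.IMAGE)
        .build()
    return GestureRecognizer.createFromOptions(context, options)
}
```

Once created, the recognizer is used exactly like one backed by a stock Google model — only the .task file changes.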
Let’s break down why you’d switch to MediaPipe instead of using ML Kit.
When “Good Enough” Isn’t Custom Enough
ML Kit is excellent for common tasks because it uses models trained on general data. But what if your app needs something more specific, more granular? This is MediaPipe’s killer feature: Customization.
ML Kit lets you use a custom TensorFlow Lite model, but MediaPipe is designed from the ground up to make training, customizing, and deploying these models a core part of the workflow.
When Every Millisecond Counts: Real-Time Performance
ML Kit’s on-device models are optimized for mobile, but MediaPipe is in a league of its own when it comes to processing live and streaming media. Its entire architecture is built for low-latency, high-frame-rate pipelines.
MediaPipe achieves this through end-to-end hardware acceleration, making intensive use of the device’s GPU to handle the heavy lifting of both ML inference and video processing. When you’re processing a continuous video stream, that level of performance is the difference between a choppy, delayed experience and a fluid, responsive one.
When Your App Lives Beyond Android
This is a big reason. ML Kit is fantastic for native mobile development on Android and iOS. But what happens when your team wants to launch a web version of your app?
MediaPipe is a cross-platform framework. You can build your ML pipeline once and deploy it everywhere: Android, iOS, web, desktop, and even IoT devices. The APIs are designed to be consistent across platforms, meaning you can reuse a lot of your logic and don’t have to start from scratch for each new platform you support.
That also means a simpler codebase and a unified interface for teams that need to maintain a consistent user experience across different ecosystems.
When You Want to Live on the Cutting Edge
As MediaPipe is a more flexible and open framework, it’s often the place where you’ll first see support for more advanced and experimental on-device tasks, especially in the realm of generative AI.
While ML Kit is now rolling out on-device GenAI APIs powered by Gemini Nano, MediaPipe often provides a more direct and configurable path for developers who want to experiment with a wider variety of open models and build more complex generative features.
Building Your First On-device LLM App
Remember the “Cat Breeds” app you built in chapter 2? What if you could chat with a veterinary specialist and ask about cats? Cool right? Let’s build that with an on-device LLM using MediaPipe!
Adding the LLM Inference API
The LLM Inference API enables Android apps to run large language models (LLMs) entirely on-device. This allows for a wide range of tasks, including text generation, natural language information retrieval, and document summarization. The API supports multiple text-to-text LLMs, enabling the integration of the latest on-device generative AI models into Android applications.
The LLM Inference API offers several key features:
This ensures the app doesn’t try to initialize a model that isn’t available yet. You need to ensure the model is downloaded and stored at the correct path so it can be initialized properly.
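As a rough sketch of what initializing the API looks like — the model path and token limit below are placeholder values, and the LlmInference class lives in MediaPipe's tasks-genai library:

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Sketch: create the LLM engine from a model file already pushed to the device.
fun createLlmInference(context: Context): LlmInference {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/model.task") // placeholder path
        .setMaxTokens(512)                              // input + output budget
        .build()
    return LlmInference.createFromOptions(context, options)
}
```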
Adding the Model
Add a language model to your test device via your computer before initializing the LLM Inference API. Run the following command in your terminal to check which devices are connected to your machine:
$ adb devices
You will get a list of all connected and authorized devices similar to the following output:
List of devices attached
ZRF198804FEBBD device
emulator-5554 device
Go to the InferenceManager class in the com.kodeco.android.aam.llm package in the starter project. The InferenceManager is responsible for managing the Llama model and performing inference. It handles loading the model, creating an inference session, and generating responses based on user prompts. To do so, InferenceManager relies on two key objects:
llmInference: An instance of the LlmInference class. This is the primary class for interacting with the LLM Inference API. It creates the LLM engine with the correct model path, maximum number of tokens, and preferred backend.
The following configuration options are available when you set up an LLM inference session:
modelPath: The path to where the model is stored within the project directory.
maxTokens: The maximum number of tokens (input + output) the model handles. Default: 512.
temperature: The amount of randomness introduced during generation. A higher temperature results in more creative text, while a lower temperature produces more predictable text. Default: 0.8.
randomSeed: The seed used for random-sample decoding during generation. Default: 0.
loraPath: The absolute path to the LoRA model locally on the device. Note: this is only compatible with GPU models.
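In code, configuring a session with these options might look like the following sketch; the parameter values here are illustrative, not requirements:

```kotlin
import com.google.mediapipe.tasks.genai.llminference.LlmInference
import com.google.mediapipe.tasks.genai.llminference.LlmInferenceSession

// Sketch: create a session on top of an existing LlmInference engine.
fun createSession(llmInference: LlmInference): LlmInferenceSession {
    val sessionOptions = LlmInferenceSession.LlmInferenceSessionOptions.builder()
        .setTopK(40)          // consider only the 40 most probable tokens
        .setTemperature(0.8f) // moderate randomness
        .build()
    return LlmInferenceSession.createFromOptions(llmInference, sessionOptions)
}
```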
Now, update the init block as follows to initialize your first inference session:
init {
  // Fail fast if the model file hasn't been pushed to the device yet.
  if (!modelExists(context)) {
    throw IllegalArgumentException("Model not found at path: ${LLM_MODEL.path}")
  }
  createEngine(context) // build the LlmInference engine
  createSession()       // open the first inference session
}
Don’t forget to add the necessary imports for all these changes.
Build and run the app. You should be able to compile without any error and see the Chat screen, but it’s not functional yet!
Chat Screen
Streaming Responses
To make the Chat screen functional, you need to pass the user’s input prompt to the llmInferenceSession. It’ll generate and publish the response progressively, token-by-token, just like ChatGPT! To achieve this, you’ll need to attach a ProgressListener to it.
Add the generateResponseAsync() function to the InferenceManager class as follows:
The above function generates a response from the LLM. It does two things:
Takes the user’s prompt as input and adds it to the llmInferenceSession as a query.
Calls generateResponseAsync() with a ProgressListener. The ProgressListener will be called with chunks of the response as they are generated — perfect for streaming updates to the UI.
The generateResponseAsync() function has to be executed when the user sends a message from the UI layer. This part is handled by ChatViewModel.
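Putting the two steps together, the function might look roughly like this sketch, assuming the session object described earlier is passed in:

```kotlin
import com.google.mediapipe.tasks.genai.llminference.LlmInferenceSession
import com.google.mediapipe.tasks.genai.llminference.ProgressListener

// Sketch: feed the prompt into the session, then stream the response
// back through the listener, one chunk at a time.
fun generateResponseAsync(
    session: LlmInferenceSession,
    prompt: String,
    progressListener: ProgressListener<String>
) {
    session.addQueryChunk(prompt)                   // 1. queue the user's prompt
    session.generateResponseAsync(progressListener) // 2. stream partial results
}
```

On the ViewModel side, the listener is just a lambda receiving each partial result plus a done flag, which makes appending tokens to UI state straightforward.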
“Ways to avoid dangerous foods in cats in 100 words”
Generating Response
Estimating Remaining Tokens
You must have noticed the “0 tokens remaining” message in the chat screen right after sending your first query to the LLM. Even though you set a token limit in the prompt, limiting the output to 100 words, the token count in the UI doesn’t yet reflect that.
That happens because you haven’t implemented token estimation for the current chat session — that’s what you’ll do in this section.
Open the InferenceManager class again and add the below function to estimate the remaining tokens for the current chat session:
fun estimateTokensRemaining(contextWindow: String): Int {
  if (contextWindow.isEmpty()) return -1
  // Ask the session how many tokens the conversation so far consumes.
  val sizeOfAllMessages = llmInferenceSession.sizeInTokens(contextWindow)
  val remainingTokens = MAX_TOKENS - sizeOfAllMessages
  return max(0, remainingTokens) // never report a negative budget
}
Now, build and run the app. Enter the same prompt in the chat — “Ways to avoid dangerous foods in cats in 100 words.” You’ll notice that the token count updates dynamically with each interaction.
Estimating Tokens
Resetting the Session
Did you try tapping the Reset button? If so, you may have noticed it’s not functional yet. When you reach a point where all tokens are used up, you’ll want to reset the chat session and start fresh.
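One way to implement a reset is to close the exhausted session and create a fresh one from the same engine. Here's a sketch, assuming the llmInference engine and session options from earlier are available:

```kotlin
import com.google.mediapipe.tasks.genai.llminference.LlmInference
import com.google.mediapipe.tasks.genai.llminference.LlmInferenceSession

// Sketch: discard the old session (and its accumulated context),
// then start a new one with the same configuration.
fun resetSession(
    llmInference: LlmInference,
    sessionOptions: LlmInferenceSession.LlmInferenceSessionOptions,
    oldSession: LlmInferenceSession
): LlmInferenceSession {
    oldSession.close() // release the old session's resources
    return LlmInferenceSession.createFromOptions(llmInference, sessionOptions)
}
```

Because the new session starts with an empty context window, the token budget is restored to its full MAX_TOKENS value.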