Practical Android AI

First Edition · Android 13 · Kotlin 2.0 · Android Studio Otter

9. Best Practices, Ethics, and the Future of Android AI
Written by Zahidur Rahman Faisal


If you’re reading this final chapter, you’re probably where I was a few years ago: a solid Android engineer, comfortable with Kotlin, Coroutines, and the whole Jetpack suite, but looking at this new wave of AI and wondering, “Where do I even start?” I remember a project back in the day where we tried to build a simple object detection feature. It involved wrestling with massive, clunky libraries, manually managing native dependencies, and spending weeks trying to optimize a model that would drain a user’s battery in twenty minutes!

Fast forward to today, and AI is no longer a niche, specialist-only field; it’s a fundamental part of the modern developer’s toolkit, reshaping how users interact with their apps and opening up entirely new possibilities for creating intelligent, personalized experiences. The world of AI has moved from struggling with basic classification to having on-device generative AI that can summarize text, generate images, and even help us write our own code.

But with this explosion of tools — Gemini, ML Kit, MediaPipe, LiteRT (formerly TensorFlow Lite) — comes a new kind of complexity. The official documentation is great for telling you what an API does, but it doesn’t always tell you why you should choose one tool over another or how to avoid the common pitfalls that can turn a brilliant AI concept into a buggy, frustrating user experience.

That’s the goal of this book — it isn’t just a rehash of the docs. It’s the collection of hard-won lessons, best practices, and strategic frameworks I’ve learned over years of shipping AI features to millions of users — the lessons I wish I’d had when I was starting out.

This chapter covers the three crucial stages of building with AI on Android:

  1. The Big Decision: Start with the single most important architectural question you’ll face: Should your AI run on the user’s device or in the cloud? This choice impacts everything that follows.

  2. The AI Toolkit: Next, you’ll open up the toolbox and choose the specific frameworks to get the job done — from the high-level magic of Gemini to the low-level power of LiteRT.

  3. Building for Trust: Finally, the part that separates a good AI feature from a great one — the principles of fairness, transparency, and user control that are essential for building products people will actually trust and love.

The Big Decision: Where Does the “Thinking” Happen?

Before you write a single line of AI-specific code, before you even think about which model to use, you have to answer one fundamental architectural question:

“Where will the AI model perform its inference?”

Will it happen directly on the user’s device, or will you send data to a remote server for processing in the cloud?

This isn’t a minor implementation detail. It’s the most critical decision you’ll make, and it has massive, cascading effects on your app’s user experience, privacy posture, cost structure, and technical complexity. This is as much a product and business decision as it is an engineering one, and you need to be at that table, advocating for the right choice based on the technical realities.

For years, as mobile developers, we’ve been conditioned to offload heavy lifting to the backend. Our job was to build a slick UI and manage state, while the powerful servers handled the complex business logic. The rise of powerful on-device AI turns that model on its head. It represents a genuine paradigm shift for us. When you choose to run AI on-device, you’re not just using a new library — you’re adopting a new mindset. Suddenly, you have to think like an embedded-systems engineer again.

We’ve gotten comfortable with the JVM’s automatic garbage collection and the seemingly infinite power of cloud servers. On-device AI forces us back to first principles. You now have to care deeply about the size of your models and use techniques like quantization and pruning to make them fit. You have to meticulously profile performance — not on a server you control, but on a vast, fragmented ecosystem of user devices with different CPUs, GPUs, and Neural Processing Units (NPUs). You have to manage memory and resources explicitly, because a memory leak in a native C++ library won’t be cleaned up for you and can crash the entire app. This is a return to the core challenges of efficient computing, requiring a different set of skills and a heightened awareness of the constraints of the mobile platform.
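To make the model-size point concrete, here is a minimal sketch of what symmetric 8-bit post-training quantization does numerically: every weight shrinks from four bytes to one, in exchange for a small rounding error. This illustrates the arithmetic only, not the LiteRT implementation; the function names are made up for this example.

```kotlin
import kotlin.math.abs
import kotlin.math.roundToInt

// Symmetric linear quantization: map float weights to Int8 using a single
// scale factor derived from the largest absolute weight in the tensor.
fun quantizeScale(weights: FloatArray): Float =
    weights.maxOf { abs(it) } / 127f

// Each 4-byte Float becomes a 1-byte Int8 value in [-127, 127].
fun quantize(weights: FloatArray, scale: Float): ByteArray =
    ByteArray(weights.size) { i ->
        (weights[i] / scale).roundToInt().coerceIn(-127, 127).toByte()
    }

// Recovering the floats shows the rounding error quantization introduces.
fun dequantize(q: ByteArray, scale: Float): FloatArray =
    FloatArray(q.size) { i -> q[i] * scale }
```

Run the round trip on a few weights and the values come back close but not exact — that accuracy-for-size trade is exactly what you profile for on real devices.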

Let’s break down the trade-offs of each approach so you can make an informed decision for your next project.

On-Device AI: The Pros and Cons of Local Intelligence

Running AI models directly on the user’s phone is the direction the industry is heading for a wide range of use cases — and for good reason. ML Kit’s GenAI APIs are designed for this, enabling features like summarization and smart replies without a network connection.

The Wins

The Trade-offs You Accept

Cloud AI: When You Need the Heavy Artillery

Despite the powerful trend toward on-device processing, the cloud still has a critical role to play, especially when you need raw, unadulterated power.

Why You’d Choose It

The Trade-offs

The Pragmatic Engineer’s Choice: The Hybrid Approach

After looking at these pros and cons, you might realize that for many sophisticated applications, the answer isn’t a strict “either/or.” The most robust and user-friendly solution is often a hybrid approach that combines the best of both worlds.  
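As a rough illustration, a hybrid setup often looks like an on-device-first router with a cloud fallback. Everything below is a hypothetical sketch: the `InferenceEngine` interface, the character limit, and the fallback rule are assumptions for this example, not a real API.

```kotlin
// Hypothetical abstraction over "something that can summarize text".
interface InferenceEngine {
    fun summarize(text: String): String
}

// Prefer the on-device model; fall back to a cloud endpoint when the task
// exceeds local limits or the local model isn't available yet.
class HybridSummarizer(
    private val onDevice: InferenceEngine?,     // null if the model isn't downloaded
    private val cloud: InferenceEngine,
    private val onDeviceCharLimit: Int = 4_000, // assumed local context limit
) : InferenceEngine {
    override fun summarize(text: String): String {
        val local = onDevice
        // Route locally when we can: no network cost, and data stays on-device.
        return if (local != null && text.length <= onDeviceCharLimit) {
            local.summarize(text)
        } else {
            cloud.summarize(text) // heavy artillery for long inputs
        }
    }
}
```

The routing rule is where product and engineering meet: you might route on input length, on connectivity, on battery state, or on whether the user has opted into cloud processing.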


Android AI Toolkit

Alright, you’ve made the big architectural decision about where the AI will run. Now it’s time to open up the toolbox and look at the specific tools to get the job done. The Android AI ecosystem is rich and varied, but it can also be confusing. The key is the “right tool for the job” philosophy. Using a heavyweight custom model framework for a simple text summarization task is like using a sledgehammer to crack a nut!


AI-powered Programming: Gemini in Android Studio

Android Studio is the tool that will help you build everything else. Gemini in Android Studio is your AI-powered pair programmer. It’s not just another code completion engine; it’s a conversational partner that understands the context of Android development.

Mastering Prompts: Getting What You Want Done

Whether you’re using Gemini in Android Studio or calling the API from your app, the quality of your output is directly proportional to the quality of your input, or “prompt.” Prompt design is a skill, but it’s one you can learn.

Be Hyper-Specific with Your Prompts

This is the golden rule. A vague question gets a vague answer. Instead of asking, “How do I use the camera?” ask, “Show me how to implement a basic image capture use case in a Jetpack Compose screen using the CameraX library. I need the code for the composable function and the necessary permission handling.” The more context you provide, the better your results will be.

Define the Structure and the Output

Don’t just throw a long, unstructured block of text at the model; give it clear, specific instructions. Add the context the model needs to solve the problem effectively. Use prefixes like Input: and Output: or formatting like XML tags to clearly separate the different parts of your prompt. This helps the model understand both the task and the desired output format.
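As a small illustration, a helper like the hypothetical `buildPrompt` below assembles those pieces: the instruction up front, context wrapped in tags, and Input:/Output: prefixes to separate the parts.

```kotlin
// Illustrative prompt builder: instruction first, context fenced in tags,
// then Input:/Output: prefixes so the model can tell the parts apart.
fun buildPrompt(instruction: String, context: String, input: String): String =
    buildString {
        appendLine(instruction)
        appendLine("<context>")
        appendLine(context.trim())
        appendLine("</context>")
        appendLine("Input: ${input.trim()}")
        append("Output:") // trailing prefix nudges the model to answer directly
    }
```

Ending the prompt with `Output:` encourages the model to continue with the answer rather than restate the task.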

Break Down Complex Problems

Don’t try to solve a complex, multi-step problem in a single prompt. Break the problem down into a sequence of simpler tasks. Make the output of the first prompt the input for the second, and so on.
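Sketched in code, a chain is just a fold: each step’s instruction is combined with the previous step’s output. The `llm` parameter below stands in for whatever model call you actually use (Gemini API, on-device, or otherwise) and is purely illustrative.

```kotlin
// Run a sequence of prompt steps, feeding each step's output into the next.
// `llm` is a placeholder for a real model call; here it is just a function.
fun chain(llm: (String) -> String, steps: List<String>, initialInput: String): String =
    steps.fold(initialInput) { carry, stepInstruction ->
        llm("$stepInstruction\n\nInput: $carry")
    }
```

For example, `steps = listOf("Extract the key facts.", "Turn the facts into a two-sentence summary.")` turns one hard prompt into two easy ones, and lets you inspect the intermediate result when debugging.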

Building AI That People Actually Trust

Now that you know the architecture and the tools, you can build a technically functional AI feature, but the job isn’t done. Technical implementation is only half the battle. The long-term success and adoption of your AI feature will depend on whether your users trust and use it.

Designing for Fairness: How to Avoid Building Biased Bots

First, let’s define “fairness” in a practical way that we, as engineers, can work with. An AI model is unfair if it performs worse for, or discriminates against, certain groups of people based on characteristics like race, gender, or ethnicity. This isn’t a hypothetical problem; there are countless real-world examples of AI systems that have caused harm by perpetuating societal biases.
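One practical way to make that definition measurable is to slice your evaluation metrics by group and alarm on large gaps. The sketch below is a hypothetical smoke test, not a complete fairness audit; real evaluations need carefully chosen metrics and representative data.

```kotlin
// One labeled prediction, tagged with the user group it came from.
data class Example(val group: String, val predicted: Boolean, val actual: Boolean)

// Accuracy computed separately for each group.
fun accuracyByGroup(data: List<Example>): Map<String, Double> =
    data.groupBy { it.group }.mapValues { (_, rows) ->
        rows.count { it.predicted == it.actual }.toDouble() / rows.size
    }

// Gap between the best- and worst-served groups; flag it if it exceeds
// whatever fairness budget your team has agreed on.
fun maxAccuracyGap(data: List<Example>): Double {
    val acc = accuracyByGroup(data).values
    return acc.maxOrNull()!! - acc.minOrNull()!!
}
```

Wiring a check like this into CI turns “the model seems fine” into a concrete, per-group number you can track release over release.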

Putting Users in Control: The Non-Negotiable Settings

Giving users clear, accessible controls is a fundamental requirement for building an ethical and trustworthy application. For AI-powered apps, the essential rule of thumb is this: the user must be in control of their own experience and their own data.
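As a minimal sketch of what “user in control” can mean in code, the hypothetical preferences below gate AI features behind explicit toggles, with cloud processing off by default. The names and defaults are assumptions for this example, not a real settings API.

```kotlin
// Illustrative user-facing AI settings; cloud processing is opt-in.
data class AiPreferences(
    val aiFeaturesEnabled: Boolean = true,
    val allowCloudProcessing: Boolean = false, // conservative default
)

sealed interface Route {
    object OnDevice : Route
    object Cloud : Route
    object Disabled : Route
}

// Every AI entry point routes through the user's preferences first.
fun routeFor(prefs: AiPreferences, needsCloud: Boolean): Route = when {
    !prefs.aiFeaturesEnabled -> Route.Disabled
    needsCloud && !prefs.allowCloudProcessing -> Route.Disabled
    needsCloud -> Route.Cloud
    else -> Route.OnDevice
}
```

Centralizing the decision in one function makes the policy auditable: there is exactly one place where a user’s choices can be (or fail to be) respected.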

Conclusion

If you’ve made it to the end of this chapter, then you already understand something many developers never quite grasp: building AI features on Android isn’t just about gluing a model onto an app. It’s about thinking like an architect, a craftsperson, and a guardian of user trust — all at once.

Have a technical question? Want to report a bug? You can ask questions and report bugs to the book authors in our official book forum.
© 2026 Kodeco Inc.
