After soaking in everything shared at Google I/O, I can’t lie – I feel supercharged! From What’s New in Flutter to Building Agentic Apps with Flutter and Firebase AI Logic, and the deep dive into How Flutter Makes the Most of Your Platforms, it felt like plugging directly into the Matrix of dev power.
But the absolute showstopper for me? David’s presentation using Firebase Studio and Builder.io was a masterpiece. I’ve already checked it out, and it’s every bit as awesome as it looked. Pair that with everything Gemini is shipping… and wow. We’re entering a whole new era of app development.
Artificial Intelligence (AI) is no longer a futuristic concept – it’s an integral part of our daily lives, transforming how we interact with technology and the world around us.
From personalized recommendations on streaming platforms to intelligent assistants that manage our schedules, AI’s applications are vast and ever-expanding. Its ability to process massive datasets, identify patterns, and make informed decisions is revolutionizing industries from healthcare to finance…and now, even cooking!
At the forefront of this AI revolution are powerful platforms like Google’s Vertex AI and Gemini. Vertex AI is a unified machine learning platform that lets you build, deploy, and scale ML models faster and more efficiently. It provides a comprehensive suite of tools for the entire ML workflow, from data preparation to model deployment and monitoring. Think of it as your all-in-one workshop for crafting intelligent systems.
Gemini, on the other hand, is Google’s most capable and flexible AI model. It’s a multimodal large language model (LLM), meaning it can understand and process information across various modalities – text, images, audio, and more. This makes Gemini incredibly versatile, enabling it to handle complex tasks that require a nuanced understanding of different types of data. For developers, Gemini opens up a world of possibilities for creating highly intelligent and intuitive applications.
Complementing these powerful AI models is Firebase AI Logic, a suite of tools within Firebase designed to simplify the integration of AI capabilities into your applications. It streamlines the process of connecting your app to Gemini models, making it easier to leverage the power of generative AI without getting bogged down in complex infrastructure.
Building an AI-Powered Cooking Assistant with Flutter and Gemini
In this article, I’ll demonstrate how I leveraged the combined power of Gemini and Flutter to build an AI-powered cooking assistant.
Fueled by a recent burst of culinary curiosity, I decided to try building an app (Snap2Chef) that could identify any food item from a photo or voice command, provide a detailed recipe, give step-by-step cooking instructions, and even link me to a relevant YouTube video for visual guidance.
Whether I’m exploring new dishes or trying to whip up a meal with what I have on hand, this app powered by Gemini makes the cooking experience smarter and more accessible.
Prerequisites
To make the most of this guide, it helps to have the following in place (though not all of it is strictly mandatory):

Flutter Development Environment: You should have a working Flutter development setup, including the Flutter SDK, a compatible IDE (like VS Code or Android Studio), and configured emulators or physical devices for testing.

Basic to Intermediate Flutter Knowledge: Familiarity with Flutter’s widget tree, state management (for example, StatefulWidget, setState), asynchronous programming (Future, async/await), and handling user input is essential.

Google Cloud Project and API Key: You’ll need an active Google Cloud project with the Vertex AI API and Gemini API enabled. Ensure you have an API key generated and ready to use. While we’ll use it directly in the app for demonstration, for production applications it’s highly recommended to use a secure backend to proxy your requests to Google’s APIs.

Basic Understanding of REST APIs: Knowing how HTTP requests (GET, POST) and JSON data work will be beneficial, though the google_generative_ai package abstracts much of this.

Assets Configuration: If you’re using a local placeholder image (placeholder.png in assets/images/), ensure your pubspec.yaml file is correctly configured to include this asset (see the snippet just after this list).
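If you do add a placeholder image, the pubspec.yaml entry would look roughly like this (the exact path is up to you):

flutter:
  assets:
    - assets/images/placeholder.png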
Here’s what we’ll cover: getting a Gemini API key, setting up the Flutter project and its dependencies, and then building out the app’s core, infrastructure, and presentation layers.
How to Get Your Gemini API Key
To use the Gemini model, you’ll need an API key. You can obtain one by following these steps:
Go to Google AI Studio.
Sign in with your Google account.
Click on “Get API key” or “Create API key in new project.”
Copy the generated API key.
Important Security Note:
In the provided HomeScreen code, the API key is directly embedded as String apiKey = "";. This is not a secure practice for production applications. Hardcoding API keys directly into your client-side code (like a Flutter app) exposes them to reverse engineering and potential misuse.
To secure your API keys in a Flutter application, I highly recommend referring to my article: How to Secure Mobile APIs in Flutter. This article covers various best practices, including:
Using environment variables or build configurations.
Storing keys in secure local storage (though still client-side).
Proxying API requests through a backend server to truly hide your API key.
Using Firebase Extensions or Cloud Functions for server-side logic that interacts with AI models, without exposing the key to the client.
For this tutorial, we’ll keep it simple, but always prioritize API security in your real-world projects!
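As a quick sketch of the first option, you can inject the key at build time with --dart-define instead of hardcoding it (the GEMINI_API_KEY name here is just an example). The key still ends up compiled into the binary, so a backend proxy remains the safer choice for production, but this at least keeps it out of source control:

// Build/run with: flutter run --dart-define=GEMINI_API_KEY=your_key_here
const String geminiApiKey = String.fromEnvironment('GEMINI_API_KEY');

// Later, in HomeScreen's initState, instead of a hardcoded string:
// _model = GenerativeModel(model: AppStrings.AI_MODEL, apiKey: geminiApiKey);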
Set Up Your Flutter Project and Dependencies
To begin, let’s create a new Flutter project and set up the necessary dependencies in your pubspec.yaml file.
First, create a new Flutter project by running:
flutter create snap2chef
cd snap2chef
Now, open pubspec.yaml and add the following dependencies:
dependencies:
  flutter:
    sdk: flutter
  google_generative_ai: ^0.4.7
  permission_handler: ^12.0.0+1
  file_picker: ^10.1.9
  image_cropper: ^9.1.0
  image_picker: ^1.1.2
  path_provider: ^2.1.5
  fluttertoast: ^8.2.12
  gap: ^3.0.1
  iconsax: ^0.0.8
  dotted_border: ^2.1.0
  youtube_player_flutter: ^9.1.1
  flutter_markdown: ^0.7.7+1
  loader_overlay: ^5.0.0
  flutter_spinkit: ^5.2.1
  cached_network_image: ^3.4.1
  flutter_native_splash: ^2.4.4
  flutter_launcher_icons: ^0.14.3
  speech_to_text: ^7.0.0

dev_dependencies:
  flutter_test:
    sdk: flutter
  flutter_lints: ^5.0.0
  build_runner: ^2.4.13
After adding the dependencies, run flutter pub get in your terminal to fetch them:
flutter pub get
Project Structure
We’ll organize our project into three main folders (with various subfolders) to maintain a clean and scalable architecture:
core: Contains core functionalities, utilities, and shared components.
infrastructure: Manages external services, data handling, and business logic.
presentation: Houses the UI layer, including screens, widgets, and components.
main.dart: The entry point of our Flutter application.
Let’s dive into the details of each folder.
1. The core Folder
The core folder will contain extensions, constants, and shared utilities.
The extensions Folder
This directory will hold extension methods that add new functionalities to existing classes.
format_to_mb.dart:
extension ByTeToMegaByte on int {
  int formatToMegaByte() {
    int bytes = this;
    return (bytes / (1024 * 1024)).ceil();
  }
}
This extension on the int type (integers) provides a convenient method, formatToMegaByte(). When called on an integer representing bytes, it converts that byte value into megabytes. The division by (1024 * 1024) converts bytes to megabytes, and .ceil() rounds the result up to the nearest whole number. This is useful for displaying file sizes in a more human-readable format.
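A quick usage sketch (file here is just any dart:io File you already have, purely for illustration):

final int sizeInBytes = await file.length();          // raw size in bytes
final int sizeInMb = sizeInBytes.formatToMegaByte();  // extension from above
debugPrint('Selected file is about $sizeInMb MB');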
loading.dart:
import 'package:flutter/material.dart';
import 'package:loader_overlay/loader_overlay.dart';

extension LoaderOverlayExtension on BuildContext {
  void showLoader() {
    loaderOverlay.show();
  }

  void hideLoader() {
    loaderOverlay.hide();
  }
}
This extension on BuildContext simplifies the process of showing and hiding a global loading overlay in your Flutter application. It leverages the loader_overlay package.

showLoader(): Calls loaderOverlay.show() to display the loading indicator.
hideLoader(): Calls loaderOverlay.hide() to dismiss the loading indicator.

These extensions make it easy to control the loader from any widget that has access to a BuildContext.
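For these calls to do anything, the app has to be wrapped in the package's overlay widget. A minimal sketch of that wiring, assuming loader_overlay's GlobalLoaderOverlay widget, looks like this:

import 'package:flutter/material.dart';
import 'package:loader_overlay/loader_overlay.dart';

void main() {
  runApp(
    // Makes context.loaderOverlay (and our showLoader/hideLoader) available app-wide.
    GlobalLoaderOverlay(
      child: const MaterialApp(home: HomeScreen()),
    ),
  );
}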
to_file.dart:
import 'dart:io';
import 'package:image_picker/image_picker.dart';

extension ToFile on Future<XFile?> {
  Future<File?> toFile() => then((xFile) => xFile?.path).then(
        (filePath) => filePath != null ? File(filePath) : null,
      );
}
This extension is designed to convert an XFile object (typically obtained from the image_picker package) into a dart:io File object.
It operates on a Future<XFile?>, meaning it expects a future that might resolve to an XFile or null.

.then((xFile) => xFile?.path): If xFile is not null, it extracts the file’s path. Otherwise, it passes null.
.then((filePath) => filePath != null ? File(filePath) : null): If a filePath is available, it creates a File object from it. Otherwise, it returns null.

This is a concise way to handle the asynchronous conversion of a picked image or video XFile into a File object that can be used for further operations like displaying or uploading.
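Because the extension is declared on Future<XFile?>, it chains directly onto a picker call. A small illustrative sketch:

final File? picked =
    await ImagePicker().pickImage(source: ImageSource.gallery).toFile();
if (picked != null) {
  // Hand the File to an Image.file widget, an upload call, and so on.
}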
to_file2.dart:
import 'dart:io';
import 'package:image_picker/image_picker.dart';
import 'package:path_provider/path_provider.dart';

extension XFileExtension on XFile {
  Future<File> toFile() async {
    final bytes = await readAsBytes();
    final tempDir = await getTemporaryDirectory();
    final tempFile = File('${tempDir.path}/${this.name}');
    await tempFile.writeAsBytes(bytes);
    return tempFile;
  }
}
This extension on XFile provides a more robust way to convert an XFile to a dart:io File. This is particularly useful when you need to write the XFile’s content to a temporary location.

await readAsBytes(): Reads the content of the XFile as a list of bytes.
final tempDir = await getTemporaryDirectory(): Gets the path to the temporary directory on the device using path_provider.
final tempFile = File('${tempDir.path}/${this.name}'): Creates a new File object in the temporary directory with the original name of the XFile.
await tempFile.writeAsBytes(bytes): Writes the bytes read from the XFile into the newly created temporary file.
return tempFile: Returns the newly created File object.

This approach comes in handy when you’re working with XFiles that might not have a readily accessible file path on the device, or when you need to ensure the file is persistently available for further processing, such as cropping.
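The practical difference from the previous extension is where it hangs: this one works on an XFile you already hold rather than on a pending Future, and it guarantees a real file on disk. A quick sketch:

final XFile? shot = await ImagePicker().pickImage(source: ImageSource.camera);
if (shot != null) {
  // Copies the bytes into the temp directory and returns a stable File path.
  final File localCopy = await shot.toFile();
}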
The constants Folder
This directory will hold static values and enumerations used throughout the app.
enums/record_source.dart:
enum RecordSource { camera, gallery }
This is a simple enumeration (enum) named RecordSource. It defines two possible values: camera and gallery. This enum is used to represent the source from which an image or video is picked, providing a clear and type-safe way to differentiate between capturing from the camera and selecting from the device’s gallery.
enums/status.dart:
enum Status { success, error }
This is another straightforward enumeration named Status. It defines success and error as its possible values. This enum is commonly used to indicate the outcome of an operation or a process, providing a standardized way to convey status information (for example, for toast messages).
app_strings.dart:
// ignore_for_file: constant_identifier_names

class AppStrings {
  static const String AI_MODEL = 'gemini-2.0-flash';
  static const String APP_SUBTITLE =
      "Capture a photo or use your voice to get step-by-step guidance on how to prepare your favorite dishes or snacks";
  static const String APP_TITLE = "Your Personal AI Recipe Guide";
  static const String AI_TEXT_PART =
      "You are a recipe ai expert. Generate a recipe based on this image, include recipe name, preparation steps, and a public YouTube video demonstrating the preparation step. Output the YouTube video URL on a new line prefixed with 'YouTube Video URL: ', it should be a https URL and the image URL on a new line prefixed with 'Image URL: ' and it should be a https URL too."
      "If the image is not a food, snacks or drink, politely inform the user that you can only answer recipe queries and ask them to close and upload a food/snack/drink image.";
  static const String AI_AUDIO_PART =
      "You are a recipe ai expert. Generate a recipe based on this text, include recipe name, preparation steps. I'd also love for you to show me any valid image online relating to this food/drink/snack and a public YouTube video demonstrating the preparation step. If the text doesn't contain things related to a food, snacks or drink, politely inform the user that you can only answer recipe queries and ask them to close and upload a food/snack/drink image. Output the YouTube video URL on a new line prefixed with 'YouTube Video URL: ', it should be a https URL and the image URL on a new line prefixed with 'Image URL: ' and it should be a https URL too, The text is: ";
}
This class, AppStrings, centralizes all the static string constants used throughout the application. This approach helps in managing strings effectively, making them easily modifiable and preventing typos.

AI_MODEL: Specifies the Gemini model to be used, in this case gemini-2.0-flash.
APP_SUBTITLE and APP_TITLE: Define the main title and subtitle for the app’s UI.
AI_TEXT_PART: This is a crucial string that serves as the prompt for the Gemini model when an image is provided. It instructs the AI to act as a recipe expert, generate a recipe including the name and steps, and provide a YouTube video. It also includes a fallback message if the image isn’t food-related.
AI_AUDIO_PART: Similar to AI_TEXT_PART, but this prompt is used when audio input is provided. It also instructs the AI to generate a recipe, include a relevant online image, and a YouTube video, with specific formatting requirements for the URLs. This prompt will be concatenated with the transcribed text from the user’s voice input.
app_colors.dart:
import 'package:flutter/material.dart';

class AppColors {
  static const primaryColor = Color(0xFF7E57C2);
  static const litePrimary = Color(0xFFEDE7F6);
  static Color errorColor = const Color(0xFFEA5757);
  static const Color grey = Color.fromARGB(255, 170, 170, 170);
  static const Color lighterGrey = Color.fromARGB(255, 204, 204, 204);
}
The AppColors class centralizes all the custom color definitions used in the application. This makes it easy to maintain a consistent color scheme throughout the UI and allows for quick global changes to the app’s theme. Each static constant represents a specific color defined by its hexadecimal or ARGB value.
The shared Folder
This directory will contain shared utility classes.
image_picker_helper.dart:
import 'dart:developer';
import 'dart:io';
import 'package:file_picker/file_picker.dart';
import 'package:flutter/foundation.dart' show immutable;
import 'package:image_picker/image_picker.dart';
import 'package:permission_handler/permission_handler.dart';
import 'package:snap2chef/core/extensions/to_file.dart';
import 'package:snap2chef/core/extensions/to_file2.dart';
import '../../presentation/components/toast_info.dart';
import '../constants/enums/status.dart';

@immutable
class ImagePickerHelper {
  static final ImagePicker _imagePicker = ImagePicker();

  static Future<PickedFileWithInfo?> pickImageFromGallery2() async {
    final isGranted = await Permission.photos.isGranted;
    if (!isGranted) {
      await Permission.photos.request();
      toastInfo(msg: "You didn't allow access", status: Status.error);
    }
    final pickedFile =
        await _imagePicker.pickImage(source: ImageSource.gallery);
    if (pickedFile != null) {
      final file = await pickedFile.toFile();
      log(pickedFile.name.split(".").join(","));
      return PickedFileWithInfo(file: file, fileName: pickedFile.name);
    } else {
      return null;
    }
  }

  static Future<FilePickerResult?> pickFileFromGallery() =>
      FilePicker.platform.pickFiles(
          type: FileType.custom,
          allowedExtensions: ["pdf", "doc", "docx", "png", "jpg", "jpeg"]);

  static Future<File?> pickImageFromGallery() =>
      _imagePicker.pickImage(source: ImageSource.gallery).toFile();

  static Future<File?> takePictureFromCamera() =>
      _imagePicker.pickImage(source: ImageSource.camera).toFile();

  static Future<File?> pickVideoFromGallery() =>
      _imagePicker.pickVideo(source: ImageSource.gallery).toFile();

  static Future<FilePickerResult?> pickSinglePDFFileFromGallery() =>
      FilePicker.platform
          .pickFiles(type: FileType.custom, allowedExtensions: ["pdf"]);
}

class PickedFileWithInfo {
  final File file;
  final String fileName;

  PickedFileWithInfo({required this.file, required this.fileName});
}
The ImagePickerHelper class provides static methods for picking various types of files (images, videos, documents) from the device’s gallery or camera, with integrated permission handling.

_imagePicker: An instance of ImagePicker for interacting with the device’s image and video picking functionalities.
pickImageFromGallery2():
Permission handling: Checks if photo gallery permission is granted using permission_handler. If not, it requests the permission and displays a toast message if denied.
Image picking: Uses _imagePicker.pickImage(source: ImageSource.gallery) to let the user select an image from the gallery.
Conversion: If an image is picked, it converts the XFile to a File object using the toFile() extension.
Logging: Logs the file name for debugging.
Return value: Returns a PickedFileWithInfo object containing the File and fileName.
pickFileFromGallery(): Uses file_picker to allow picking various file types (PDF, DOC, DOCX, PNG, JPG, JPEG) from the gallery.
pickImageFromGallery(): A simpler method to pick an image from the gallery, directly returning a Future<File?> using the toFile() extension.
takePictureFromCamera(): Captures an image using the device’s camera and returns a Future<File?>.
pickVideoFromGallery(): Picks a video from the gallery and returns a Future<File?>.
pickSinglePDFFileFromGallery(): Specifically picks a single PDF file from the gallery.
PickedFileWithInfo class: A simple data class to hold both the File object and its fileName.
This helper class centralizes all file picking logic, making it reusable and easier to manage permissions and different picking scenarios.
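One practical note: permission_handler and image_picker only work once the matching platform permissions are declared (camera and photo-library usage descriptions in Info.plist, and the corresponding entries in AndroidManifest.xml), so check each package's setup instructions. With that in place, calling the helper from a widget is a one-liner; here's an illustrative sketch:

// Hypothetical usage inside an onTap/onPressed handler.
final picked = await ImagePickerHelper.pickImageFromGallery2();
if (picked != null) {
  debugPrint('Picked ${picked.fileName} at ${picked.file.path}');
}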
2. The infrastructure Folder
This folder handles the logic for interacting with external services and processing data.
image_upload_controller.dart:
import 'dart:async';
import 'dart:io';
import 'package:flutter/material.dart';
import 'package:gap/gap.dart';
import 'package:iconsax/iconsax.dart';
import 'package:image_cropper/image_cropper.dart';
import '../core/constants/app_colors.dart';
import '../core/constants/enums/record_source.dart';
import '../core/shared/image_picker_helper.dart';
import '../presentation/widgets/image_picker_component.dart';

class ImageUploadController {
  /// crop image
  static Future<void> _cropImage(
    File? selectedFile,
    Function assignCroppedImage,
  ) async {
    if (selectedFile != null) {
      final croppedFile = await ImageCropper().cropImage(
        sourcePath: selectedFile.path,
        compressFormat: ImageCompressFormat.jpg,
        compressQuality: 100,
        uiSettings: [
          AndroidUiSettings(
            toolbarTitle: 'Crop Image',
            toolbarColor: AppColors.primaryColor,
            toolbarWidgetColor: Colors.white,
            initAspectRatio: CropAspectRatioPreset.square,
            lockAspectRatio: false,
            statusBarColor: AppColors.primaryColor,
            activeControlsWidgetColor: AppColors.primaryColor,
            aspectRatioPresets: [
              CropAspectRatioPreset.original,
              CropAspectRatioPreset.square,
              CropAspectRatioPreset.ratio4x3,
              CropAspectRatioPresetCustom(),
            ],
          ),
          IOSUiSettings(
            title: 'Crop Image',
            aspectRatioPresets: [
              CropAspectRatioPreset.original,
              CropAspectRatioPreset.square,
              CropAspectRatioPreset.ratio4x3,
              CropAspectRatioPresetCustom(),
            ],
          ),
        ],
      );
      assignCroppedImage(croppedFile);
    }
  }

  /// pick image from camera and gallery
  static void imagePicker(
    RecordSource recordSource,
    Completer? completer,
    BuildContext context,
    Function setFile,
    Function assignCroppedImage,
  ) async {
    if (recordSource == RecordSource.gallery) {
      final pickedFile = await ImagePickerHelper.pickImageFromGallery();
      if (pickedFile == null) {
        return;
      }
      completer?.complete(pickedFile.path);
      if (!context.mounted) {
        return;
      }
      setFile(pickedFile);
      if (context.mounted) {
        Navigator.of(context).pop();
      }
    } else if (recordSource == RecordSource.camera) {
      final pickedFile = await ImagePickerHelper.takePictureFromCamera();
      if (pickedFile == null) {
        return;
      }
      completer?.complete(pickedFile.path);
      if (!context.mounted) {
        return;
      }
      setFile(pickedFile);
      // crop image
      _cropImage(pickedFile, assignCroppedImage);
      if (context.mounted) {
        Navigator.of(context).pop();
      }
    }
  }

  /// modal for selecting file source
  static Future showFilePickerButtonSheet(
    BuildContext context,
    Completer? completer,
    Function setFile,
    Function assignCroppedImage,
  ) {
    return showModalBottomSheet(
      shape: const RoundedRectangleBorder(
        borderRadius: BorderRadius.only(
          topLeft: Radius.circular(35),
          topRight: Radius.circular(35),
        ),
      ),
      context: context,
      builder: (context) {
        return SingleChildScrollView(
          child: Container(
            padding: const EdgeInsets.fromLTRB(10, 14, 15, 20),
            child: Column(
              children: [
                Container(
                  height: 4,
                  width: 50,
                  padding: const EdgeInsets.only(top: 5),
                  decoration: BoxDecoration(
                    borderRadius: BorderRadius.circular(7),
                    color: const Color(0xffE4E4E4),
                  ),
                ),
                Padding(
                  padding: const EdgeInsets.all(10.0),
                  child: Column(
                    mainAxisSize: MainAxisSize.min,
                    crossAxisAlignment: CrossAxisAlignment.start,
                    children: [
                      GestureDetector(
                        onTap: () => Navigator.of(context).pop(),
                        child: const Align(
                          alignment: Alignment.topRight,
                          child: Icon(Icons.close, color: Colors.grey),
                        ),
                      ),
                      const Gap(10),
                      const Text(
                        'Select Image Source',
                        style: TextStyle(
                          color: AppColors.primaryColor,
                          fontSize: 16,
                          fontWeight: FontWeight.w600,
                        ),
                      ),
                      const Gap(20),
                      ImagePickerTile(
                        title: 'Capture from Camera',
                        subtitle: 'Take a live snapshot',
                        icon: Iconsax.camera,
                        recordSource: RecordSource.camera,
                        completer: completer,
                        context: context,
                        setFile: setFile,
                        assignCroppedImage: assignCroppedImage,
                      ),
                      const Divider(color: Color(0xffE4E4E4)),
                      ImagePickerTile(
                        title: 'Upload from Gallery',
                        subtitle: 'Select image from gallery',
                        icon: Iconsax.gallery,
                        recordSource: RecordSource.gallery,
                        completer: completer,
                        context: context,
                        setFile: setFile,
                        assignCroppedImage: assignCroppedImage,
                      ),
                    ],
                  ),
                ),
              ],
            ),
          ),
        );
      },
    );
  }
}

class CropAspectRatioPresetCustom implements CropAspectRatioPresetData {
  @override
  (int, int)? get data => (2, 3);

  @override
  String get name => '2x3 (customized)';
}
The ImageUploadController class manages the process of picking and optionally cropping images before they are used in the application.

_cropImage(File? selectedFile, Function assignCroppedImage):
This private static method handles the image cropping functionality using the image_cropper package.
It takes a selectedFile (the image to be cropped) and a Function assignCroppedImage (a callback to update the UI with the cropped image).
ImageCropper().cropImage(...) opens the cropping UI. It’s configured with various UI settings for both Android and iOS, including toolbarColor, aspectRatioPresets, and more, to ensure a consistent and branded experience.
CropAspectRatioPresetCustom(): This is a custom class that implements CropAspectRatioPresetData to define a specific cropping aspect ratio (2×3 in this case), providing more flexibility than the built-in presets.
Once cropped, the croppedFile is passed to the assignCroppedImage callback.

imagePicker(RecordSource recordSource, Completer? completer, BuildContext context, Function setFile, Function assignCroppedImage):
This static method is the core logic for initiating image picking from either the camera or gallery.
It takes a recordSource (from the RecordSource enum), an optional completer (likely for handling asynchronous operations outside the UI), the current context, setFile (a callback to set the picked file in the UI), and assignCroppedImage (the callback for cropped images).
Gallery selection (RecordSource.gallery): It calls ImagePickerHelper.pickImageFromGallery() to get the selected image. If a file is picked, it completes the completer, calls setFile to update the UI, and then pops the bottom sheet.
Camera capture (RecordSource.camera): It calls ImagePickerHelper.takePictureFromCamera() to capture an image. Similar to gallery selection, it completes the completer and calls setFile, and then, importantly, it calls _cropImage to let the user crop the newly captured image before it’s fully used. Finally, it pops the bottom sheet.
context.mounted checks are included to ensure that UI updates only happen if the widget is still in the widget tree, preventing errors.

showFilePickerButtonSheet(...):
This static method displays a modal bottom sheet, providing the user with options to select an image source (Camera or Gallery).
It uses showModalBottomSheet to present a nicely styled sheet with rounded corners.
Inside the sheet, it displays a draggable indicator and two ImagePickerTile widgets (a custom widget for displaying each option) for “Capture from Camera” and “Upload from Gallery.”
When an ImagePickerTile is tapped, it internally calls the imagePicker method with the corresponding RecordSource.

In summary, ImageUploadController acts as a central orchestrator for image acquisition, offering options to pick from the gallery or camera and integrating robust image cropping capabilities, all while ensuring a smooth user experience through UI callbacks and modal interactions.
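To give a feel for how this is wired up, the HomeScreen (shown later) opens the sheet from its upload area's tap handler. A rough sketch of that call, with the callback bodies as placeholders, might look like this:

ImageUploadController.showFilePickerButtonSheet(
  context,
  completer,
  (File file) => setState(() => selectedFile = file), // setFile callback
  (CroppedFile? cropped) {
    // assignCroppedImage callback
    if (cropped != null) {
      setState(() => selectedFile = File(cropped.path));
    }
  },
);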
recipe_controller.dart:
import 'dart:io';
import 'package:cached_network_image/cached_network_image.dart';
import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:flutter_markdown/flutter_markdown.dart';
import 'package:gap/gap.dart';
import 'package:google_generative_ai/google_generative_ai.dart';
import 'package:snap2chef/core/extensions/loading.dart';
import 'package:youtube_player_flutter/youtube_player_flutter.dart';
import '../core/constants/app_colors.dart';
import '../core/constants/app_strings.dart';
import '../core/constants/enums/status.dart';
import '../presentation/components/toast_info.dart';

class RecipeController {
  // send image to gemini
  static Future<void> _sendImageToGemini(
    File? selectedFile,
    GenerativeModel model,
    BuildContext context,
    Function removeFile,
    Function removeText,
  ) async {
    toastInfo(msg: "Obtaining recipe and preparations", status: Status.success);
    if (selectedFile == null) return;
    final bytes = await selectedFile.readAsBytes();
    final prompt = TextPart(AppStrings.AI_TEXT_PART);
    final image = DataPart('image/jpeg', bytes);
    final response = await model.generateContent([
      Content.multi([prompt, image]),
    ]);
    if (context.mounted) {
      _displayRecipe(
        response.text,
        context,
        selectedFile,
        removeFile,
        removeText,
      );
    }
  }

  // send audio text prompt
  static Future<void> _sendAudioTextPrompt(
    GenerativeModel model,
    BuildContext context,
    String transcribedText,
    File? selectedFile,
    Function removeFile,
    Function removeText,
  ) async {
    toastInfo(msg: "Obtaining recipe and preparations", status: Status.success);
    final prompt = '${AppStrings.AI_AUDIO_PART} ${transcribedText.trim()}.';
    final content = [Content.text(prompt)];
    final response = await model.generateContent(content);
    if (context.mounted) {
      _displayRecipe(
        response.text,
        context,
        selectedFile,
        removeFile,
        removeText,
      );
    }
  }

  static void _displayRecipe(
    String? recipeText,
    BuildContext context,
    File? selectedFile,
    Function removeFile,
    Function removeText,
  ) {
    if (recipeText == null || recipeText.isEmpty) {
      recipeText = "No recipe could be generated or parsed from the response.";
    }
    String workingRecipeText = recipeText;
    String? videoId;
    String? extractedImageUrl;
    final youtubeLineRegex =
        RegExp(r'YouTube Video URL:\s*(https?://\S+)', caseSensitive: false);
    final youtubeMatch = youtubeLineRegex.firstMatch(recipeText);
    if (youtubeMatch != null) {
      final youtubeUrl = youtubeMatch.group(1);
      final ytIdRegex = RegExp(r'v=([\w-]{11})');
      final ytIdMatch = ytIdRegex.firstMatch(youtubeUrl ?? '');
      if (ytIdMatch != null) {
        videoId = ytIdMatch.group(1);
      }
      workingRecipeText =
          workingRecipeText.replaceAll(youtubeMatch.group(0)!, '').trim();
    }
    final imageLine = RegExp(
        r'Image URL:\s*(https?://\S+\.(?:png|jpe?g|gif|webp|bmp|svg))');
    final imageMatch = imageLine.firstMatch(recipeText);
    if (imageMatch != null) {
      extractedImageUrl = imageMatch.group(1);
      workingRecipeText =
          workingRecipeText.replaceAll(imageMatch.group(0)!, '').trim();
    }
    print("Extracted Image URL: $extractedImageUrl");
    print("Extracted Video ID: $videoId");
    final String cleanedRecipeText = workingRecipeText;
    showDialog(
      barrierDismissible: false,
      context: context,
      builder: (BuildContext dialogContext) {
        YoutubePlayerController? ytController;
        if (videoId != null) {
          ytController = YoutubePlayerController(
            initialVideoId: videoId,
            flags: const YoutubePlayerFlags(
              autoPlay: false,
              mute: false,
              disableDragSeek: false,
              loop: false,
              isLive: false,
              forceHD: false,
              enableCaption: true,
            ),
          );
        }
        return AlertDialog(
          title: const Text('Generated Recipe'),
          content: SingleChildScrollView(
            child: Column(
              mainAxisSize: MainAxisSize.min,
              children: [
                selectedFile != null
                    ? Container(
                        height: 150,
                        width: double.infinity,
                        decoration: BoxDecoration(
                          borderRadius: BorderRadius.circular(7),
                          border: Border.all(color: AppColors.primaryColor),
                          image: DecorationImage(
                            image: FileImage(File(selectedFile.path)),
                            fit: BoxFit.cover,
                          ),
                        ),
                      )
                    : extractedImageUrl != null
                        ? ClipRRect(
                            borderRadius: BorderRadius.circular(7),
                            child: CachedNetworkImage(
                              imageUrl: extractedImageUrl,
                              height: 150,
                              width: double.infinity,
                              fit: BoxFit.cover,
                              placeholder: (context, url) => Image.asset(
                                  'assets/images/placeholder.png',
                                  fit: BoxFit.cover),
                              errorWidget: (context, url, error) => Image.asset(
                                  'assets/images/placeholder.png',
                                  fit: BoxFit.cover),
                            ),
                          )
                        : const SizedBox.shrink(),
                const Gap(16),
                MarkdownBody(
                  data: cleanedRecipeText,
                  styleSheet: MarkdownStyleSheet(
                    h1: const TextStyle(
                      fontSize: 24,
                      fontWeight: FontWeight.bold,
                      color: Colors.deepPurple,
                    ),
                    h2: const TextStyle(
                      fontSize: 20,
                      fontWeight: FontWeight.bold,
                    ),
                    strong: const TextStyle(fontWeight: FontWeight.bold),
                  ),
                ),
                if (videoId != null && ytController != null) ...[
                  const Gap(16),
                  YoutubePlayer(
                    controller: ytController,
                    showVideoProgressIndicator: true,
                    progressIndicatorColor: AppColors.primaryColor,
                    progressColors: const ProgressBarColors(
                      playedColor: AppColors.primaryColor,
                      handleColor: Colors.amberAccent,
                    ),
                    onReady: () {
                      // Controller is ready
                    },
                  ),
                ],
              ],
            ),
          ),
          actions: <Widget>[
            TextButton(
              onPressed: () {
                ytController?.dispose();
                Navigator.of(dialogContext).pop();
                if (selectedFile != null) {
                  removeFile();
                } else {
                  removeText();
                }
              },
              child: const Text('Close'),
            ),
          ],
        );
      },
    );
  }

  static void sendRequest(
    BuildContext context,
    File? selectedFile,
    GenerativeModel model,
    Function removeFile,
    String transcribedText,
    Function removeText,
  ) async {
    context.showLoader();
    toastInfo(msg: "Processing...", status: Status.success);
    try {
      if (selectedFile != null) {
        await _sendImageToGemini(
          selectedFile,
          model,
          context,
          removeFile,
          removeText,
        );
      } else if (transcribedText.isNotEmpty) {
        await _sendAudioTextPrompt(
          model,
          context,
          transcribedText,
          selectedFile,
          removeFile,
          removeText,
        );
      }
    } catch (e) {
      if (kDebugMode) {
        print('Error sending request: $e');
      }
      toastInfo(msg: "Error sending request: $e", status: Status.error);
    } finally {
      if (context.mounted) {
        context.hideLoader();
      }
    }
  }
}
The RecipeController class is responsible for interacting with the Gemini AI model to generate recipes and then display them to the user, complete with parsed YouTube video links and potentially extracted image URLs.

_sendImageToGemini(File? selectedFile, GenerativeModel model, BuildContext context, Function removeFile, Function removeText):
This private static method handles sending an image to the Gemini model.
It displays an "Obtaining recipe and preparations" toast message.
It reads the selectedFile (the image) as bytes.
It creates a TextPart from AppStrings.AI_TEXT_PART (our image-based AI prompt) and a DataPart for the image bytes.
model.generateContent([Content.multi([prompt, image])]): This is where the magic happens! It sends both the text prompt and the image data to the Gemini model for generation.
Upon receiving a response, it calls _displayRecipe to show the generated recipe to the user.
The context.mounted check ensures the context is still valid before attempting UI updates.

_sendAudioTextPrompt(GenerativeModel model, BuildContext context, String transcribedText, File? selectedFile, Function removeFile, Function removeText):
This private static method handles sending transcribed audio text to the Gemini model.
It constructs a full prompt by concatenating AppStrings.AI_AUDIO_PART with the transcribedText.
model.generateContent([Content.text(prompt)]): It sends only the text prompt to the Gemini model.
Similar to the image method, it calls _displayRecipe with the generated text.

_displayRecipe(String? recipeText, BuildContext context, File? selectedFile, Function removeFile, Function removeText):
This private static method is responsible for parsing the AI’s response and displaying it in a modal dialog.
Error handling: If recipeText is null or empty, it provides a default message.
Extracting the YouTube video URL: It uses a RegExp (youtubeLineRegex) to find a line in the recipeText that matches the “YouTube Video URL: https://…” pattern. If found, it extracts the full URL and then applies another RegExp (ytIdRegex) to get the YouTube video ID. The extracted video URL text is then removed from workingRecipeText to clean the displayed recipe.
Extracting the image URL: Similarly, it uses another RegExp (imageLine) to extract an image URL from the recipeText. The extracted image URL text is also removed.
Debug printing: Prints the extracted URLs for debugging.
showDialog: Presents an AlertDialog to the user.
YoutubePlayerController: If a videoId was extracted, it initializes a YoutubePlayerController from the youtube_player_flutter package, configured with basic flags (for example, autoPlay: false).
Recipe display: If a selectedFile (image taken by the user) is present, it displays that image. Otherwise, if an extractedImageUrl was found in the AI’s response, it uses CachedNetworkImage to display that image. This is particularly useful for text-based queries where Gemini might suggest an image.
MarkdownBody: Uses flutter_markdown to render the cleanedRecipeText (after removing the YouTube and image URLs) as Markdown, allowing for rich text formatting (for example, bolding and headings) directly from the AI’s response.
YoutubePlayer: If a videoId and ytController are available, it embeds the YouTube video player directly into the dialog, with customizable progress bar colors.
“Close” button: Disposes the ytController (important for resource management), pops the dialog, and calls either removeFile() or removeText() to clear the input fields based on what was used for the query.

sendRequest(BuildContext context, File? selectedFile, GenerativeModel model, Function removeFile, String transcribedText, Function removeText):
This public static method is the entry point for sending requests to the Gemini model.
context.showLoader(): Displays a loading overlay using our custom extension.
toastInfo(msg: "Processing...", status: Status.success): Shows a toast message.
Conditional logic: If selectedFile is not null, it calls _sendImageToGemini. Otherwise, if transcribedText is not empty, it calls _sendAudioTextPrompt.
Error handling: Uses a try-catch block to gracefully handle any errors during the AI request, logging them in debug mode and showing an error toast to the user.
finally block: Ensures context.hideLoader() is always called, regardless of success or error, to dismiss the loading indicator.

In essence, RecipeController orchestrates the entire process of sending user input (image or voice), communicating with the Gemini AI, parsing its intelligent response, and presenting it to the user with interactive elements like YouTube videos and relevant images.
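For context, the HomeScreen below kicks off this flow from its submit action; a simplified sketch of that call (the callbacks are placeholders) looks like this:

RecipeController.sendRequest(
  context,
  selectedFile,                                // null when the user used voice input
  _model,                                      // GenerativeModel created in initState
  () => setState(() => selectedFile = null),   // removeFile
  _query.text,                                 // transcribed text from speech_to_text
  removeText,                                  // clears the text/voice input
);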
3. The presentation Folder
This folder contains all the UI-related code.
screens/home_screen.dart:
import 'dart:async';
import 'dart:io';
import 'package:flutter/material.dart';
import 'package:gap/gap.dart';
import 'package:google_generative_ai/google_generative_ai.dart';
import 'package:iconsax/iconsax.dart';
import 'package:image_cropper/image_cropper.dart';
import 'package:snap2chef/core/extensions/format_to_mb.dart';
import 'package:snap2chef/infrastructure/image_upload_controller.dart';
import 'package:snap2chef/infrastructure/recipe_controller.dart';
import 'package:speech_to_text/speech_recognition_result.dart';
import 'package:speech_to_text/speech_to_text.dart';
import '../../core/constants/app_colors.dart';
import '../../core/constants/app_strings.dart';
import '../../core/constants/enums/status.dart';
import '../components/toast_info.dart';
import '../widgets/glowing_microphone.dart';
import '../widgets/image_previewer.dart';
import '../widgets/query_text_box.dart';
import '../widgets/upload_container.dart';

class HomeScreen extends StatefulWidget {
  const HomeScreen({super.key});

  @override
  State<HomeScreen> createState() => _HomeScreenState();
}

class _HomeScreenState extends State<HomeScreen> {
  File? selectedFile;
  Completer? completer;
  String? fileName;
  int? fileSize;
  late GenerativeModel _model;
  String apiKey = ""; // <--- REPLACE WITH YOUR ACTUAL API KEY
  final TextEditingController _query = TextEditingController();
  final SpeechToText _speechToText = SpeechToText();
  bool _speechEnabled = false;
  String _lastWords = '';
  bool isRecording = false;
  bool isDoneRecording = false;

  void removeText() {
    setState(() {
      _query.clear();
      isDoneRecording = false;
      _lastWords = "";
    });
    _query.clear();
  }

  void setKeyword(String prompt) {
    if (prompt.isEmpty) {
      toastInfo(msg: "You didn't say anything!", status: Status.error);
      setState(() {
        isDoneRecording = false;
        isRecording = false;
      });
      return;
    }
    setState(() {
      _lastWords = "";
      isRecording = false;
      _query.text = prompt;
      isDoneRecording = true;
    });
  }

  void _initSpeech() async {
    try {
      _speechEnabled = await _speechToText.initialize(
        onStatus: (status) => debugPrint('Speech status: $status'),
        onError: (error) => debugPrint('Speech error: $error'),
      );
      if (!_speechEnabled) {
        toastInfo(
          msg: "Microphone permission not granted or speech not available.",
          status: Status.error,
        );
      }
      setState(() {});
    } catch (e) {
      debugPrint("Speech initialization failed: $e");
    }
  }

  void _startListening() async {
    setState(() {
      isRecording = true;
    });
    if (!_speechEnabled) {
      toastInfo(msg: "Speech not initialized yet.", status: Status.error);
      return;
    }
    await _speechToText.listen(onResult: _onSpeechResult);
    setState(() {});
  }

  void _stopListening() async {
    await _speechToText.stop();
    setKeyword(_lastWords);
    setState(() {});
  }

  void _onSpeechResult(SpeechRecognitionResult result) {
    setState(() {
      _lastWords = result.recognizedWords;
    });
  }

  @override
  void initState() {
    super.initState();
    // TODO: Replace "YOUR_API_KEY" with your actual Gemini API Key
    // Refer to https://www.freecodecamp.org/news/how-to-secure-mobile-apis-in-flutter/ for API key security.
    apiKey = "YOUR_API_KEY"; // Secure this!
    _model = GenerativeModel(model: AppStrings.AI_MODEL, apiKey: apiKey);
    _initSpeech();
  }

  @override
  void dispose() {
    _query.dispose();
    _speechToText.cancel(); // Cancel listening to prevent resource leaks
    super.dispose();
}
<span class="hljs-keyword">void</span> assignCroppedImage(CroppedFile? croppedFile) {
<span class="hljs-keyword">if</span> (croppedFile != <span class="hljs-keyword">null</span>) {
setState(() {
selectedFile = File(croppedFile.path);
});
}
}
<span class="hljs-keyword">void</span> setFile(File? pickedFile) {
setState(() {
selectedFile = pickedFile;
fileName = pickedFile?.path.split(<span class="hljs-string">'/'</span>).last;
fileSize = pickedFile?.lengthSync().formatToMegaByte();
});
}
<span class="hljs-keyword">void</span> removeFile() {
setState(() {
selectedFile = <span class="hljs-keyword">null</span>;
fileSize = <span class="hljs-keyword">null</span>;
});
}
<span class="hljs-meta">@override</span>
Widget build(BuildContext context) {
Size size = MediaQuery.sizeOf(context);
<span class="hljs-keyword">return</span> Scaffold(
floatingActionButton: selectedFile != <span class="hljs-keyword">null</span> || _query.text.isNotEmpty
? FloatingActionButton.extended(
onPressed: () => RecipeController.sendRequest(
context,
selectedFile,
_model,
removeFile,
_query.text,
removeText,
),
backgroundColor: AppColors.primaryColor,
icon: <span class="hljs-keyword">const</span> Icon(Iconsax.send_1, color: Colors.white),
label: <span class="hljs-keyword">const</span> Text(
<span class="hljs-string">"Send Request"</span>,
style: TextStyle(color: Colors.white),
),
)
: <span class="hljs-keyword">null</span>,
body: Padding(
padding: <span class="hljs-keyword">const</span> EdgeInsets.all(<span class="hljs-number">18.0</span>),
child: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: [
Text(
AppStrings.APP_TITLE,
textAlign: TextAlign.center,
style: TextStyle(
color: Colors.black,
fontWeight: FontWeight.w500,
fontSize: <span class="hljs-number">16</span>,
),
),
Text(
AppStrings.APP_SUBTITLE,
textAlign: TextAlign.center,
style: TextStyle(
color: AppColors.grey,
fontSize: <span class="hljs-number">15</span>,
fontWeight: FontWeight.w300,
),
),
<span class="hljs-keyword">const</span> Gap(<span class="hljs-number">20</span>),
<span class="hljs-keyword">if</span> (!isDoneRecording)
!isRecording
? selectedFile != <span class="hljs-keyword">null</span>
? ImagePreviewer(
size: size,
pickedFile: selectedFile,
removeFile: removeFile,
context: context,
completer: completer,
setFile: setFile,
assignCroppedImage: assignCroppedImage,
)
: GestureDetector(
onTap: () =>
ImageUploadController.showFilePickerButtonSheet(
context,
completer,
setFile,
assignCroppedImage,
),
child: UploadContainer(
title: <span class="hljs-string">'an image of a food or snack'</span>,
size: size,
),
)
: SizedBox.shrink(),
<span class="hljs-keyword">const</span> Gap(<span class="hljs-number">20</span>),
<span class="hljs-keyword">if</span> (selectedFile == <span class="hljs-keyword">null</span>) ...[
<span class="hljs-keyword">if</span> (!isDoneRecording) ...[
Text(
<span class="hljs-string">"or record your voice"</span>,
style: TextStyle(
color: AppColors.grey,
fontSize: <span class="hljs-number">16</span>,
fontWeight: FontWeight.w200,
),
),
Center(
child: GestureDetector(
onTap: () {
<span class="hljs-keyword">if</span> (!_speechEnabled) {
toastInfo(
msg: <span class="hljs-string">"Speech recognition not ready yet."</span>,
status: Status.error,
);
<span class="hljs-keyword">return</span>;
}
<span class="hljs-keyword">if</span> (_speechToText.isNotListening) {
_startListening();
} <span class="hljs-keyword">else</span> {
_stopListening();
}
},
child: GlowingMicButton(
isListening: !_speechToText.isNotListening,
),
),
),
<span class="hljs-keyword">const</span> Gap(<span class="hljs-number">10</span>),
Container(
padding: EdgeInsets.all(<span class="hljs-number">16</span>),
child: Text(
_speechToText.isListening
? _lastWords
: _speechEnabled
? <span class="hljs-string">'Tap the microphone to start listening...'</span>
: <span class="hljs-string">'Speech not available'</span>,
),
),
<span class="hljs-keyword">const</span> Gap(<span class="hljs-number">10</span>),
],
isDoneRecording
? QueryTextBox(query: _query)
: SizedBox.shrink(),
],
<span class="hljs-keyword">const</span> Gap(<span class="hljs-number">20</span>),
selectedFile != <span class="hljs-keyword">null</span> || _query.text.isNotEmpty
? GestureDetector(
onTap: () {
<span class="hljs-keyword">if</span> (selectedFile != <span class="hljs-keyword">null</span>) {
removeFile();
} <span class="hljs-keyword">else</span> {
removeText();
}
},
child: CircleAvatar(
backgroundColor: AppColors.primaryColor,
radius: <span class="hljs-number">30</span>,
child: Icon(Iconsax.close_circle, color: Colors.white),
),
)
: SizedBox.shrink(),
],
),
),
),
);
}
}
The HomeScreen is the main user interface of our AI cooking assistant application. It manages the state for image selection and voice input, and triggers the AI recipe generation.

State variables:

- selectedFile: Stores the File object of the image picked by the user.
- completer: A Completer object, often used for asynchronous operations to signal completion.
- fileName, fileSize: Store details about the selected image.
- _model: An instance of GenerativeModel from the google_generative_ai package, which is our interface to the Gemini API.
- apiKey: Crucially, this is where you'll insert your Gemini API key. Remember the security warning above!
- _query: A TextEditingController for the text input field, which will display the transcribed voice input.
- _speechToText: An instance of SpeechToText for handling voice recognition.
- _speechEnabled: A boolean indicating whether speech recognition is initialized and available.
- _lastWords: Stores the most recently recognized words from speech.
- isRecording: A boolean to track whether voice recording is active.
- isDoneRecording: A boolean to track whether a voice recording has been completed and transcribed.
Methods:

- removeText(): Clears the text input field (_query) and resets isDoneRecording and _lastWords to clear any previous voice input.
- setKeyword(String prompt): Sets the _query text to the prompt (transcribed voice) and updates the isRecording and isDoneRecording states. It also shows a toast message if the prompt is empty.
- _initSpeech(): Initializes the SpeechToText plugin. It requests microphone permission and sets _speechEnabled based on the initialization result. If permission is not granted, it shows an error toast.
- _startListening(): Starts the speech recognition listener and sets isRecording to true.
- _stopListening(): Stops the speech recognition listener and calls setKeyword with _lastWords to finalize the transcribed text.
- _onSpeechResult(SpeechRecognitionResult result): Callback for SpeechToText that updates _lastWords with the recognized words as speech recognition progresses.
- initState(): Called when the widget is inserted into the widget tree. It initializes _model with the Gemini API key and model name, and calls _initSpeech() to set up voice recognition.
- dispose(): Called when the widget is removed from the widget tree. It disposes of the _query controller and cancels the _speechToText listener to prevent memory leaks.
- assignCroppedImage(CroppedFile? croppedFile): Callback passed to ImageUploadController to update selectedFile with the path of the newly cropped image.
- setFile(File? pickedFile): Callback passed to ImageUploadController to update selectedFile with the picked image; it also extracts the fileName and fileSize using our custom extension (see the sketch after this list).
- removeFile(): Clears the selectedFile and fileSize states, effectively removing the displayed image.
build(BuildContext context) Method – UI Layout:

- FloatingActionButton.extended: This button, labeled "Send Request," becomes visible only when an image (selectedFile) is chosen or when there's text in the query box (_query.text.isNotEmpty). Tapping it triggers RecipeController.sendRequest with the relevant input.
- App title and subtitle: Displays the main headings using AppStrings.
- Image upload/preview section:
  - If !isDoneRecording (no voice input has been finalized) and !isRecording (not currently recording voice):
    - If selectedFile is not null, it shows an ImagePreviewer widget to display the chosen image with an option to remove it.
    - Otherwise (no image selected), it displays an UploadContainer, which acts as a tappable area to trigger ImageUploadController.showFilePickerButtonSheet for picking an image.
- Voice input section:
  - This section (if (selectedFile == null) ...) only appears if no image is selected, providing an alternative input method.
  - If !isDoneRecording, it shows an "or record your voice" text and a GlowingMicButton.
  - Tapping the GlowingMicButton toggles speech recognition (_startListening/_stopListening).
  - A Text widget displays the current speech recognition status or _lastWords as they are transcribed.
  - If isDoneRecording (voice input has been finalized), it shows a QueryTextBox that displays the transcribed text, allowing for review before sending the request.
- Clear input button: A CircleAvatar with a close icon appears when either an image is selected or text is present in the query. Tapping it calls removeFile() or removeText() to clear the respective input.

Overall, HomeScreen intelligently adapts its UI based on user input (image or voice) and orchestrates the interaction with the ImageUploadController for image handling and the RecipeController for AI recipe generation.
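For orientation, here's a minimal sketch of how a multimodal request like the one RecipeController.sendRequest makes can be assembled with the google_generative_ai package. The function name, parameters, and prompt wording below are illustrative assumptions, not the app's exact implementation:

import 'dart:io';
import 'package:google_generative_ai/google_generative_ai.dart';

/// Hypothetical sketch: sends the picked image and/or typed query to Gemini
/// and returns the raw response text.
Future<String?> sendRecipeRequest(
  GenerativeModel model,
  File? image,
  String query,
) async {
  final parts = <Part>[
    TextPart(query.isNotEmpty
        ? query
        : 'Identify this food and give me a detailed recipe.'),
    if (image != null) DataPart('image/jpeg', await image.readAsBytes()),
  ];
  final response = await model.generateContent([Content.multi(parts)]);
  return response.text;
}

In the app, the controller would then parse this text into recipe sections, links, and images before displaying them.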
The components Folder
This folder contains smaller, reusable UI elements.
toast_info.dart
<span class="hljs-keyword">import</span> <span class="hljs-string">'package:fluttertoast/fluttertoast.dart'</span>;
<span class="hljs-keyword">import</span> <span class="hljs-string">'../../core/constants/app_colors.dart'</span>;
<span class="hljs-keyword">import</span> <span class="hljs-string">'package:flutter/material.dart'</span>; <span class="hljs-comment">// Import for MaterialColor/Colors</span>
<span class="hljs-keyword">void</span> toastInfo({
<span class="hljs-keyword">required</span> <span class="hljs-built_in">String</span> msg,
<span class="hljs-keyword">required</span> Status status,
}) {
Fluttertoast.showToast(
msg: msg,
toastLength: Toast.LENGTH_SHORT,
gravity: ToastGravity.BOTTOM,
timeInSecForIosWeb: <span class="hljs-number">1</span>,
backgroundColor: status == Status.success ? AppColors.primaryColor : AppColors.errorColor,
textColor: Colors.white,
fontSize: <span class="hljs-number">16.0</span>,
);
}
The toastInfo function provides a convenient way to display brief, non-intrusive messages (toasts) to the user, typically for feedback like "success" or "error" messages.

It takes two required parameters:

- msg: The message string to be displayed in the toast.
- status: An enum of type Status (success or error) that determines the background color of the toast.

Fluttertoast.showToast(...) is the core function from the fluttertoast package that displays the toast.

- toastLength: Sets how long the toast stays visible (short).
- gravity: Positions the toast at the bottom of the screen.
- timeInSecForIosWeb: Duration for web/iOS.
- backgroundColor: Dynamically set to AppColors.primaryColor for success and AppColors.errorColor for errors, providing visual cues to the user.
- textColor: Sets the text color to white.
- fontSize: Sets the font size of the toast message.

This function centralizes toast message display, ensuring consistency in appearance and behavior throughout the app.
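As a quick usage example (the message strings here are just illustrative):

// Somewhere in the UI layer after an operation completes:
toastInfo(msg: 'Recipe generated!', status: Status.success);
toastInfo(msg: 'Something went wrong. Try again.', status: Status.error);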
The widgets Folder
The application’s user interface is constructed using a series of well-defined, reusable Flutter widgets. Each widget serves a specific purpose, contributing to the overall functionality and aesthetic of Snap2Chef.
1. glowing_microphone.dart:
This widget creates an animated microphone button that visually indicates when the application is actively listening for speech input.
<span class="hljs-keyword">import</span> <span class="hljs-string">'package:flutter/material.dart'</span>;
<span class="hljs-keyword">import</span> <span class="hljs-string">'package:iconsax/iconsax.dart'</span>;
<span class="hljs-keyword">import</span> <span class="hljs-string">'../../core/constants/app_colors.dart'</span>;
<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">GlowingMicButton</span> <span class="hljs-keyword">extends</span> <span class="hljs-title">StatefulWidget</span> </span>{
<span class="hljs-keyword">final</span> <span class="hljs-built_in">bool</span> isListening;
<span class="hljs-keyword">const</span> GlowingMicButton({<span class="hljs-keyword">super</span>.key, <span class="hljs-keyword">required</span> <span class="hljs-keyword">this</span>.isListening});
<span class="hljs-meta">@override</span>
State<GlowingMicButton> createState() => _GlowingMicButtonState();
}
<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">_GlowingMicButtonState</span> <span class="hljs-keyword">extends</span> <span class="hljs-title">State</span><<span class="hljs-title">GlowingMicButton</span>>
<span class="hljs-title">with</span> <span class="hljs-title">SingleTickerProviderStateMixin</span> </span>{
<span class="hljs-keyword">late</span> <span class="hljs-keyword">final</span> AnimationController _controller;
<span class="hljs-keyword">late</span> <span class="hljs-keyword">final</span> Animation<<span class="hljs-built_in">double</span>> _animation;
<span class="hljs-meta">@override</span>
<span class="hljs-keyword">void</span> initState() {
<span class="hljs-keyword">super</span>.initState();
_controller = AnimationController(
vsync: <span class="hljs-keyword">this</span>,
duration: <span class="hljs-keyword">const</span> <span class="hljs-built_in">Duration</span>(seconds: <span class="hljs-number">2</span>),
);
_animation = Tween<<span class="hljs-built_in">double</span>>(begin: <span class="hljs-number">0.0</span>, end: <span class="hljs-number">25.0</span>).animate(
CurvedAnimation(parent: _controller, curve: Curves.easeOut),
);
<span class="hljs-keyword">if</span> (widget.isListening) {
_controller.repeat(reverse: <span class="hljs-keyword">true</span>);
}
}
<span class="hljs-meta">@override</span>
<span class="hljs-keyword">void</span> didUpdateWidget(<span class="hljs-keyword">covariant</span> GlowingMicButton oldWidget) {
<span class="hljs-keyword">super</span>.didUpdateWidget(oldWidget);
<span class="hljs-keyword">if</span> (widget.isListening && !_controller.isAnimating) {
_controller.repeat(reverse: <span class="hljs-keyword">true</span>);
} <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (!widget.isListening && _controller.isAnimating) {
_controller.stop();
}
}
<span class="hljs-meta">@override</span>
<span class="hljs-keyword">void</span> dispose() {
_controller.dispose();
<span class="hljs-keyword">super</span>.dispose();
}
<span class="hljs-meta">@override</span>
Widget build(BuildContext context) {
<span class="hljs-keyword">return</span> SizedBox(
width: <span class="hljs-number">100</span>, <span class="hljs-comment">// Enough space for the full glow</span>
height: <span class="hljs-number">100</span>,
child: Stack(
alignment: Alignment.center,
children: [
<span class="hljs-keyword">if</span> (widget.isListening)
AnimatedBuilder(
animation: _animation,
builder: (_, __) {
<span class="hljs-keyword">return</span> Container(
width: <span class="hljs-number">60</span> + _animation.value,
height: <span class="hljs-number">60</span> + _animation.value,
decoration: BoxDecoration(
shape: BoxShape.circle,
color: AppColors.primaryColor.withOpacity(<span class="hljs-number">0.15</span>),
),
);
},
),
CircleAvatar(
backgroundColor: AppColors.primaryColor,
radius: <span class="hljs-number">30</span>,
child: Icon(
widget.isListening ? Iconsax.stop_circle : Iconsax.microphone,
color: Colors.white,
),
),
],
),
);
}
}
- GlowingMicButton (StatefulWidget): This is a StatefulWidget because it needs to manage its own animation state. It takes a final bool isListening property, which dictates whether the microphone should display a glowing animation or remain static.
- _GlowingMicButtonState (State with SingleTickerProviderStateMixin):
  - SingleTickerProviderStateMixin: This mixin is crucial for providing a Ticker to an AnimationController. A Ticker essentially drives the animation forward by linking it to frame callbacks, ensuring smooth animation performance.
  - _controller (AnimationController): Manages the animation. It's initialized with vsync: this (from SingleTickerProviderStateMixin) and a duration of 2 seconds.
  - _animation (Animation<double>): Defines the range of values the animation will produce. Here, a Tween<double>(begin: 0.0, end: 25.0) is used with a CurvedAnimation (specifically Curves.easeOut) to create a smooth, decelerating effect as the glow expands.
  - initState(): When the widget is first created, the AnimationController and Animation are initialized. If isListening is initially true, the animation is set to repeat(reverse: true) so the glow pulses in and out continuously.
  - didUpdateWidget(): This lifecycle method is called when the widget's configuration (its properties) changes. It checks whether isListening has changed and starts or stops the animation accordingly, ensuring the animation dynamically responds to changes in the isListening state from its parent.
  - dispose(): Crucially, _controller.dispose() is called here to release the resources held by the animation controller when the widget is removed from the widget tree, preventing memory leaks.
- build() Method:
  - SizedBox: Provides a fixed size (100×100) for the button, ensuring enough space for the glowing effect.
  - Stack: Allows layering widgets on top of each other.
  - if (widget.isListening) AnimatedBuilder(...): This conditional renders the glowing effect only when isListening is true.
    - AnimatedBuilder: Rebuilds its child whenever the _animation changes value.
    - Inside AnimatedBuilder, a Container is used to create the circular glow. Its width and height are dynamically increased by _animation.value, creating the expanding effect. The color is AppColors.primaryColor with 0.15 opacity, giving it a subtle glow.
  - CircleAvatar: This is the main microphone button. Its backgroundColor is AppColors.primaryColor and its radius is 30. The child is an Icon from the Iconsax package, dynamically switching between Iconsax.stop_circle (when listening) and Iconsax.microphone (when not listening). The icon color is white.
2. image_picker_component.dart
This widget provides a reusable ListTile interface for users to select images from either the camera or the gallery.
<span class="hljs-keyword">import</span> <span class="hljs-string">'dart:async'</span>;
<span class="hljs-keyword">import</span> <span class="hljs-string">'package:flutter/cupertino.dart'</span>;
<span class="hljs-keyword">import</span> <span class="hljs-string">'package:flutter/material.dart'</span>;
<span class="hljs-keyword">import</span> <span class="hljs-string">'package:snap2chef/infrastructure/image_upload_controller.dart'</span>;
<span class="hljs-keyword">import</span> <span class="hljs-string">'../../core/constants/app_colors.dart'</span>;
<span class="hljs-keyword">import</span> <span class="hljs-string">'../../core/constants/enums/record_source.dart'</span>;
<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">ImagePickerTile</span> <span class="hljs-keyword">extends</span> <span class="hljs-title">StatelessWidget</span> </span>{
<span class="hljs-keyword">const</span> ImagePickerTile({
<span class="hljs-keyword">super</span>.key,
<span class="hljs-keyword">required</span> <span class="hljs-keyword">this</span>.title,
<span class="hljs-keyword">required</span> <span class="hljs-keyword">this</span>.subtitle,
<span class="hljs-keyword">required</span> <span class="hljs-keyword">this</span>.icon,
<span class="hljs-keyword">required</span> <span class="hljs-keyword">this</span>.recordSource,
<span class="hljs-keyword">required</span> <span class="hljs-keyword">this</span>.completer,
<span class="hljs-keyword">required</span> <span class="hljs-keyword">this</span>.context,
<span class="hljs-keyword">required</span> <span class="hljs-keyword">this</span>.setFile,
<span class="hljs-keyword">required</span> <span class="hljs-keyword">this</span>.assignCroppedImage,
});
<span class="hljs-keyword">final</span> <span class="hljs-built_in">String</span> title;
<span class="hljs-keyword">final</span> <span class="hljs-built_in">String</span> subtitle;
<span class="hljs-keyword">final</span> IconData icon;
<span class="hljs-keyword">final</span> RecordSource recordSource;
<span class="hljs-keyword">final</span> Completer? completer;
<span class="hljs-keyword">final</span> BuildContext context;
<span class="hljs-keyword">final</span> <span class="hljs-built_in">Function</span> setFile;
<span class="hljs-keyword">final</span> <span class="hljs-built_in">Function</span> assignCroppedImage;
<span class="hljs-meta">@override</span>
Widget build(BuildContext context) {
<span class="hljs-keyword">return</span> ListTile(
leading: CircleAvatar(
backgroundColor: AppColors.litePrimary,
child: Padding(
padding: <span class="hljs-keyword">const</span> EdgeInsets.all(<span class="hljs-number">3.0</span>),
child: Center(
child: Icon(icon, color: AppColors.primaryColor, size: <span class="hljs-number">20</span>),
),
),
),
title: Text(title, style: <span class="hljs-keyword">const</span> TextStyle(color: Colors.black)),
subtitle: Text(
subtitle,
style: <span class="hljs-keyword">const</span> TextStyle(fontSize: <span class="hljs-number">14</span>, color: Colors.grey),
),
trailing: <span class="hljs-keyword">const</span> Icon(
CupertinoIcons.chevron_right,
size: <span class="hljs-number">20</span>,
color: Color(<span class="hljs-number">0xffE4E4E4</span>),
),
onTap: () {
ImageUploadController.imagePicker(
recordSource,
completer,
context,
setFile,
assignCroppedImage,
);
},
);
}
}
- ImagePickerTile (StatelessWidget): This is a StatelessWidget because it simply renders content based on its immutable properties and triggers an external function (ImageUploadController.imagePicker) when tapped.
- Properties: It takes several final properties to make it highly customizable:
  - title and subtitle: Text for the main and secondary lines of the list tile.
  - icon: The IconData to display as the leading icon.
  - recordSource: An enum (RecordSource) likely indicating whether the image should be picked from the camera or the gallery.
  - completer: A Completer object, often used for asynchronous operations to signal when a task is complete.
  - context: The BuildContext that allows the ImageUploadController to show dialogs or navigate.
  - setFile: A Function callback to update the selected image file in the parent widget.
  - assignCroppedImage: A Function callback to handle the result of any image cropping operation.
- build() Method:
  - ListTile: A standard Flutter widget used to arrange elements in a single row.
  - leading: Displays a CircleAvatar with a light primary background color, containing the specified icon in the primary color. This creates a visually appealing icon button on the left.
  - title: Displays the title text in black.
  - subtitle: Displays the subtitle text in grey with a font size of 14, providing additional descriptive information.
  - trailing: Shows a CupertinoIcons.chevron_right (right arrow) icon, common for indicating navigation or actionable items in a list.
  - onTap: This is the primary interaction point. When the ListTile is tapped, it calls the static method ImageUploadController.imagePicker, passing all the necessary parameters. This centralizes the image-picking logic within ImageUploadController, making ImagePickerTile purely a UI component (a hedged sketch of that controller method follows below).
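The ImageUploadController itself isn't reproduced in this section, but here's a minimal sketch of what its imagePicker method might look like, assuming it relies on the image_picker and image_cropper packages. The class shape, enum values, and callback signatures below are assumptions for illustration only:

import 'dart:async';
import 'dart:io';
import 'package:flutter/material.dart';
import 'package:image_cropper/image_cropper.dart';
import 'package:image_picker/image_picker.dart';

// Hypothetical stand-in for core/constants/enums/record_source.dart.
enum RecordSource { camera, gallery }

class ImageUploadController {
  /// Hypothetical sketch: picks an image from the chosen source, hands the
  /// File back via setFile, then offers cropping via assignCroppedImage.
  static Future<void> imagePicker(
    RecordSource source,
    Completer? completer,
    BuildContext context,
    Function setFile,
    Function assignCroppedImage,
  ) async {
    final picked = await ImagePicker().pickImage(
      source: source == RecordSource.camera
          ? ImageSource.camera
          : ImageSource.gallery,
    );
    if (picked == null) return; // user cancelled

    setFile(File(picked.path));

    final cropped = await ImageCropper().cropImage(sourcePath: picked.path);
    assignCroppedImage(cropped);
  }
}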
3. image_previewer.dart
This widget is responsible for displaying a previously picked image and offering options to ‘Edit’ (re-pick) or ‘Remove’ the image.
<span class="hljs-keyword">import</span> <span class="hljs-string">'dart:async'</span>;
<span class="hljs-keyword">import</span> <span class="hljs-string">'dart:io'</span>;
<span class="hljs-keyword">import</span> <span class="hljs-string">'package:flutter/material.dart'</span>;
<span class="hljs-keyword">import</span> <span class="hljs-string">'package:iconsax/iconsax.dart'</span>;
<span class="hljs-keyword">import</span> <span class="hljs-string">'package:snap2chef/infrastructure/image_upload_controller.dart'</span>;
<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">ImagePreviewer</span> <span class="hljs-keyword">extends</span> <span class="hljs-title">StatelessWidget</span> </span>{
<span class="hljs-keyword">const</span> ImagePreviewer({
<span class="hljs-keyword">super</span>.key,
<span class="hljs-keyword">required</span> <span class="hljs-keyword">this</span>.size,
<span class="hljs-keyword">required</span> <span class="hljs-keyword">this</span>.pickedFile,
<span class="hljs-keyword">required</span> <span class="hljs-keyword">this</span>.removeFile,
<span class="hljs-keyword">required</span> <span class="hljs-keyword">this</span>.context,
<span class="hljs-keyword">required</span> <span class="hljs-keyword">this</span>.completer,
<span class="hljs-keyword">required</span> <span class="hljs-keyword">this</span>.setFile,
<span class="hljs-keyword">required</span> <span class="hljs-keyword">this</span>.assignCroppedImage,
});
<span class="hljs-keyword">final</span> Size size;
<span class="hljs-keyword">final</span> File? pickedFile;
<span class="hljs-keyword">final</span> <span class="hljs-built_in">Function</span> removeFile;
<span class="hljs-keyword">final</span> BuildContext context;
<span class="hljs-keyword">final</span> Completer? completer;
<span class="hljs-keyword">final</span> <span class="hljs-built_in">Function</span> setFile;
<span class="hljs-keyword">final</span> <span class="hljs-built_in">Function</span> assignCroppedImage;
<span class="hljs-meta">@override</span>
Widget build(BuildContext context) {
<span class="hljs-keyword">return</span> Container(
height: size.height * <span class="hljs-number">0.13</span>,
width: <span class="hljs-built_in">double</span>.infinity,
decoration: BoxDecoration(
borderRadius: BorderRadius.circular(<span class="hljs-number">7</span>),
<span class="hljs-comment">// border: Border.all(</span>
<span class="hljs-comment">// color: AppColors.borderColor,</span>
<span class="hljs-comment">// ),</span>
image: DecorationImage(
image: FileImage(
File(pickedFile!.path),
),
fit: BoxFit.cover,
),
),
child: Stack(
children: [
Container(
decoration: BoxDecoration(
color: Colors.black.withOpacity(<span class="hljs-number">0.3</span>),
borderRadius: BorderRadius.circular(<span class="hljs-number">7</span>),
),
),
<span class="hljs-comment">// Centered content</span>
Center(
child: Wrap(
crossAxisAlignment: WrapCrossAlignment.center,
spacing: <span class="hljs-number">20</span>,
children: [
GestureDetector(
onTap: () {
ImageUploadController.showFilePickerButtonSheet(context,completer,setFile,assignCroppedImage);
},
child: Column(
children: [
Icon(
Iconsax.edit_2,
size: <span class="hljs-number">20</span>,
color: Colors.white,
),
<span class="hljs-keyword">const</span> Text(
<span class="hljs-string">'Edit'</span>,
style: TextStyle(
color: Colors.white,
fontSize: <span class="hljs-number">15</span>,
),
)
],
),
),
GestureDetector(
onTap: () {
removeFile();
},
child: Column(
children: [
Icon(
Iconsax.note_remove,
color: Colors.white,
size: <span class="hljs-number">20</span>,
),
<span class="hljs-keyword">const</span> Text(
<span class="hljs-string">'Remove'</span>,
style: TextStyle(
color: Colors.white,
fontSize: <span class="hljs-number">15</span>,
),
)
],
),
),
],
),
),
],
),
);
}
}
- ImagePreviewer (StatelessWidget): Similar to ImagePickerTile, this is a StatelessWidget that displays content and triggers callbacks.
- Properties:
  - size: The Size of the parent widget, used to calculate the height of the preview container proportionally.
  - pickedFile: A File? representing the image file to be displayed. It's nullable, implying that this widget might only show if a file has been picked.
  - removeFile: A Function callback to handle the removal of the currently displayed image.
  - context, completer, setFile, assignCroppedImage: These are passed through to the ImageUploadController when the 'Edit' action is triggered, similar to the ImagePickerTile.
- build() Method:
  - Container: The primary container for the image preview.
    - height: Set to 13% of the screen height, providing a responsive size.
    - width: double.infinity to take the full available width.
    - decoration:
      - borderRadius: Applies rounded corners to the container.
      - image: DecorationImage(...): This is where the magic happens. It displays the pickedFile as a background image for the container.
      - FileImage(File(pickedFile!.path)): Creates an image provider from the local file path. The ! (null assertion operator) implies pickedFile is expected to be non-null when this widget is displayed.
      - fit: BoxFit.cover: Ensures the image covers the entire container, potentially cropping parts of it.
  - Stack: Layers content on top of the image.
    - Container (overlay): A semi-transparent black Container (Colors.black.withOpacity(0.3)) is placed on top of the image to create a darkened overlay. This improves the readability of the white text and icons placed over the image.
    - Center: Centers the action buttons horizontally and vertically within the overlay.
    - Wrap: Arranges the 'Edit' and 'Remove' buttons horizontally with a spacing of 20. WrapCrossAlignment.center aligns them vertically within the Wrap.
    - GestureDetector (for 'Edit'): onTap calls ImageUploadController.showFilePickerButtonSheet, allowing the user to re-select or change the image. This method likely presents a bottom sheet with options to pick from the camera or gallery, similar to how the initial image picking works (a hedched sketch of such a bottom sheet follows below). Its child is a Column containing an Iconsax.edit_2 icon and an 'Edit' text, both in white.
    - GestureDetector (for 'Remove'): onTap calls the removeFile() callback, which would typically clear the selected pickedFile in the parent state, causing this previewer to disappear or revert to an upload state. Its child is a Column containing an Iconsax.note_remove icon and a 'Remove' text, both in white.
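showFilePickerButtonSheet isn't shown in this section either. Under the same assumptions as the imagePicker sketch above, it might look roughly like this, with the app's ImagePickerTile handling each option (the icons, labels, and overall layout here are guesses):

import 'dart:async';
import 'package:flutter/material.dart';
import 'package:iconsax/iconsax.dart';
// Also assumes the project's ImagePickerTile widget and RecordSource enum
// are imported from their respective files.

/// Hypothetical sketch of ImageUploadController.showFilePickerButtonSheet:
/// presents a bottom sheet offering camera and gallery sources.
void showFilePickerButtonSheet(
  BuildContext context,
  Completer? completer,
  Function setFile,
  Function assignCroppedImage,
) {
  showModalBottomSheet(
    context: context,
    builder: (_) => Column(
      mainAxisSize: MainAxisSize.min,
      children: [
        ImagePickerTile(
          title: 'Camera',
          subtitle: 'Take a new photo',
          icon: Iconsax.camera,
          recordSource: RecordSource.camera,
          completer: completer,
          context: context,
          setFile: setFile,
          assignCroppedImage: assignCroppedImage,
        ),
        ImagePickerTile(
          title: 'Gallery',
          subtitle: 'Choose an existing photo',
          icon: Iconsax.gallery,
          recordSource: RecordSource.gallery,
          completer: completer,
          context: context,
          setFile: setFile,
          assignCroppedImage: assignCroppedImage,
        ),
      ],
    ),
  );
}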
4. query_text_box.dart
This widget provides a styled TextFormField for multi-line text input, typically used for user queries or notes.
<span class="hljs-keyword">import</span> <span class="hljs-string">'package:flutter/material.dart'</span>;
<span class="hljs-keyword">import</span> <span class="hljs-string">'../../core/constants/app_colors.dart'</span>;
<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">QueryTextBox</span> <span class="hljs-keyword">extends</span> <span class="hljs-title">StatelessWidget</span> </span>{
<span class="hljs-keyword">const</span> QueryTextBox({
<span class="hljs-keyword">super</span>.key,
<span class="hljs-keyword">required</span> TextEditingController query,
}) : _query = query;
<span class="hljs-keyword">final</span> TextEditingController _query;
<span class="hljs-meta">@override</span>
Widget build(BuildContext context) {
<span class="hljs-keyword">return</span> TextFormField(
controller: _query,
maxLines: <span class="hljs-number">4</span>,
autofocus: <span class="hljs-keyword">true</span>,
decoration: InputDecoration(
hintStyle: TextStyle(color: AppColors.lighterGrey),
border: OutlineInputBorder(
borderRadius: BorderRadius.circular(<span class="hljs-number">12.0</span>),
borderSide: BorderSide(color: Colors.grey.shade400),
),
focusedBorder: OutlineInputBorder(
borderRadius: BorderRadius.circular(<span class="hljs-number">12.0</span>),
borderSide: <span class="hljs-keyword">const</span> BorderSide(
color: AppColors.primaryColor,
width: <span class="hljs-number">2.0</span>,
),
),
enabledBorder: OutlineInputBorder(
borderRadius: BorderRadius.circular(<span class="hljs-number">12.0</span>),
borderSide: BorderSide(color: Colors.grey.shade300),
),
contentPadding: <span class="hljs-keyword">const</span> EdgeInsets.symmetric(
vertical: <span class="hljs-number">12.0</span>,
horizontal: <span class="hljs-number">16.0</span>,
),
),
style: <span class="hljs-keyword">const</span> TextStyle(
fontSize: <span class="hljs-number">14.0</span>,
color: Colors.black,
),
keyboardType: TextInputType.multiline,
textInputAction: TextInputAction.newline,
);
}
}
- QueryTextBox (StatelessWidget): A StatelessWidget that renders a text input field. It takes a TextEditingController as a required parameter, allowing external control over the text field's content.
- Properties:
  - _query (TextEditingController): The controller linked to the TextFormField. This allows retrieving the text, setting initial text, and listening for changes.
- build() Method:
  - TextFormField: The core input widget.
  - controller: _query: Binds the TextEditingController to this field.
  - maxLines: 4: Allows the text field to expand up to 4 lines before becoming scrollable.
  - autofocus: true: Automatically focuses the text field when the screen loads, bringing up the keyboard.
  - decoration: InputDecoration(...): Defines the visual styling of the input field.
    - hintStyle: Sets the color of the hint text to AppColors.lighterGrey.
    - border: Defines the default border, with rounded corners and a light grey outline.
    - focusedBorder: Defines the border style when the field is actively focused by the user. It uses AppColors.primaryColor with a wider stroke (width: 2.0) to provide a clear visual indicator of focus.
    - enabledBorder: Defines the border style when the field is enabled but not focused, using a slightly darker grey.
    - contentPadding: Adds internal padding within the text field for better spacing of the text.
  - style: Sets the font size to 14.0 and the color to black for the entered text.
  - keyboardType: TextInputType.multiline: Configures the keyboard for multi-line text input, typically providing a "return" key that creates a new line.
  - textInputAction: TextInputAction.newline: Specifies that pressing the "Enter" key on the keyboard should insert a new line rather than submit.
5. upload_container.dart
This widget creates a visually distinct “dotted border” container, typically used as a tappable area to trigger file upload or selection actions.
<span class="hljs-keyword">import</span> <span class="hljs-string">'package:dotted_border/dotted_border.dart'</span>;
<span class="hljs-keyword">import</span> <span class="hljs-string">'package:flutter/material.dart'</span>;
<span class="hljs-keyword">import</span> <span class="hljs-string">'package:gap/gap.dart'</span>;
<span class="hljs-keyword">import</span> <span class="hljs-string">'package:iconsax/iconsax.dart'</span>;
<span class="hljs-keyword">import</span> <span class="hljs-string">'../../core/constants/app_colors.dart'</span>;
<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">UploadContainer</span> <span class="hljs-keyword">extends</span> <span class="hljs-title">StatelessWidget</span> </span>{
<span class="hljs-keyword">const</span> UploadContainer({
<span class="hljs-keyword">super</span>.key,
<span class="hljs-keyword">required</span> <span class="hljs-keyword">this</span>.size,
<span class="hljs-keyword">required</span> <span class="hljs-keyword">this</span>.title,
});
<span class="hljs-keyword">final</span> Size size;
<span class="hljs-keyword">final</span> <span class="hljs-built_in">String</span> title;
<span class="hljs-meta">@override</span>
Widget build(BuildContext context) {
<span class="hljs-keyword">return</span> DottedBorder(
color: AppColors.primaryColor,
radius: <span class="hljs-keyword">const</span> Radius.circular(<span class="hljs-number">15</span>),
borderType: BorderType.RRect,
strokeWidth: <span class="hljs-number">1</span>,
child: SizedBox(
height: size.height * <span class="hljs-number">0.13</span>,
width: <span class="hljs-built_in">double</span>.infinity,
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: [
Container(
height: <span class="hljs-number">70</span>,
width: <span class="hljs-number">60</span>,
decoration: BoxDecoration(
shape: BoxShape.circle,
color: AppColors.litePrimary,
),
child: Padding(
padding: <span class="hljs-keyword">const</span> EdgeInsets.all(<span class="hljs-number">13.0</span>),
child: Icon(
Iconsax.document_upload,
color: AppColors.primaryColor,
),
),
),
<span class="hljs-keyword">const</span> Gap(<span class="hljs-number">5</span>),
RichText(
text: TextSpan(
text: <span class="hljs-string">'Click to select '</span>,
style: TextStyle(
color: AppColors.primaryColor,
),
children: [
TextSpan(
text: title,
style: TextStyle(
color: Color(<span class="hljs-number">0xff555555</span>),
),
)
],
),
),
],
),
),
);
}
}
- UploadContainer (StatelessWidget): A StatelessWidget used primarily for visual presentation, indicating an upload zone.
- Properties:
  - size: The Size of the parent, used to determine the container's height proportionally.
  - title: A String displayed as part of the "Click to select [title]" message.
- build() Method:
  - DottedBorder: This package provides the dotted border effect.
    - color: AppColors.primaryColor: The color of the dotted line.
    - radius: const Radius.circular(15): Applies rounded corners to the dotted border.
    - borderType: BorderType.RRect: Specifies that the border should follow a rounded-rectangle shape.
    - strokeWidth: 1: Sets the thickness of the dotted line.
  - SizedBox: Defines the internal dimensions of the area within the dotted border, taking up 13% of the screen height and the full width.
  - Column: Arranges the icon and text vertically, centered within the SizedBox.
    - Container (icon background): A circular container with an AppColors.litePrimary background holds the upload icon.
    - Iconsax.document_upload: The icon signifying an upload action, colored with AppColors.primaryColor.
  - Gap(5): From the gap package, this provides a small vertical space (5 pixels) between the icon and the text.
  - RichText: Allows different styles within a single text block.
    - TextSpan(text: 'Click to select ', ...): The first part of the message, styled with AppColors.primaryColor.
    - children: [TextSpan(text: title, ...)]: The second part of the message, which is the title property passed to the widget, styled in a darker grey. This structure keeps "Click to select " consistently styled while the title (for example, "image" or "document") can have a different appearance.
Summary of Code Implementation
We’ve covered a significant amount of ground in this part of the article, transforming our basic Flutter application into a powerful AI-powered recipe guide. We started by setting up the core UI, then delved into integrating the google_generative_ai
package to communicate with Google’s Gemini models for both image and voice input.
We implemented robust logic for:

- Image input: Capturing images from the camera or gallery, cropping them, and sending them to the Gemini model.
- Voice input: Recording audio and preparing the groundwork for transcription before sending text to the Gemini model.
- Dynamic content display: Parsing the AI's response to extract and present not just the recipe text, but also embedded YouTube instructional videos and relevant images, all within a nicely formatted dialog using flutter_markdown and cached_network_image. We also ensured proper lifecycle management for our media players.
This highlights how easily you can leverage advanced AI capabilities like multimodal understanding and natural language generation within your Flutter applications. By building on these concepts, you can create truly interactive and intelligent user experiences.
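As a rough illustration of the display side (not the app's exact widget tree), a response dialog can render Markdown and remote images along these lines, assuming the flutter_markdown and cached_network_image packages:

import 'package:cached_network_image/cached_network_image.dart';
import 'package:flutter/material.dart';
import 'package:flutter_markdown/flutter_markdown.dart';

/// Hypothetical sketch of a recipe dialog body: renders the AI's Markdown
/// response and, if one was extracted, a cached cover image for the dish.
Widget recipeDialogBody(String markdownResponse, String? imageUrl) {
  return SingleChildScrollView(
    child: Column(
      crossAxisAlignment: CrossAxisAlignment.start,
      children: [
        if (imageUrl != null)
          CachedNetworkImage(
            imageUrl: imageUrl,
            placeholder: (_, __) => const CircularProgressIndicator(),
            errorWidget: (_, __, ___) => const Icon(Icons.broken_image),
          ),
        MarkdownBody(data: markdownResponse),
      ],
    ),
  );
}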
Now that we have the core logic in place for capturing input, communicating with the AI, and displaying its rich responses, we need to ensure that our application can actually access the necessary device features.
Permissions: Ensuring App Functionality and User Privacy
For a Flutter application to interact with system features like the camera, microphone, or file storage, it must declare specific permissions in both its Android and iOS manifests. These declarations inform the operating system about the app’s requirements and, for sensitive permissions, prompt the user for consent at runtime.
Android Permissions (in android/app/src/main/AndroidManifest.xml)
<span class="hljs-tag"><<span class="hljs-name">manifest</span> <span class="hljs-attr">xmlns:android</span>=<span class="hljs-string">"http://schemas.android.com/apk/res/android"</span>></span>
<span class="hljs-tag"><<span class="hljs-name">uses-permission</span> <span class="hljs-attr">android:name</span>=<span class="hljs-string">"android.permission.RECORD_AUDIO"</span>/></span>
<span class="hljs-tag"><<span class="hljs-name">uses-permission</span> <span class="hljs-attr">android:name</span>=<span class="hljs-string">"android.permission.CAMERA"</span> /></span>
<span class="hljs-tag"><<span class="hljs-name">uses-permission</span> <span class="hljs-attr">android:name</span>=<span class="hljs-string">"android.permission.INTERNET"</span> /></span>
<span class="hljs-tag"><<span class="hljs-name">uses-permission</span> <span class="hljs-attr">android:name</span>=<span class="hljs-string">"android.permission.READ_EXTERNAL_STORAGE"</span> /></span>
<span class="hljs-tag"></<span class="hljs-name">manifest</span>></span>
Here’s what’s going on:
<uses-permission android:name="android.permission.RECORD_AUDIO"/>
: This permission is necessary for the application to access the device’s microphone and record audio. It’s crucial for any speech recognition or voice input features, like theGlowingMicButton
implies.<uses-permission android:name="android.permission.CAMERA" />
: Grants the application access to the device’s camera. This is essential for features that allow users to take photos, such as those enabled byImagePickerTile
orImagePreviewer
.<uses-permission android:name="android.permission.INTERNET" />
: This is a fundamental permission required for almost any modern application that connects to the internet. It allows the app to send and receive data from web services, like interacting with the Gemini API, Firebase, or Vertex AI.<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
: Allows the application to read files from the device’s shared external storage (for example, photos saved in the gallery). This is necessary when picking existing images from the gallery. For newer Android versions (Android 10+), scoped storage might change how this works, but for reading user-selected media, this declaration is still relevant. For writing to external storage,WRITE_EXTERNAL_STORAGE
would also be needed.
iOS Permissions (in ios/Runner/Info.plist)
<span class="hljs-meta"><?xml version="1.0" encoding="UTF-8"?></span>
<span class="hljs-meta"><!DOCTYPE <span class="hljs-meta-keyword">plist</span> <span class="hljs-meta-keyword">PUBLIC</span> <span class="hljs-meta-string">"-//Apple//DTD PLIST 1.0//EN"</span> <span class="hljs-meta-string">"http://www.apple.com/DTDs/PropertyList-1.0.dtd"</span>></span>
<span class="hljs-tag"><<span class="hljs-name">plist</span> <span class="hljs-attr">version</span>=<span class="hljs-string">"1.0"</span>></span>
<span class="hljs-tag"><<span class="hljs-name">dict</span>></span>
<span class="hljs-tag"><<span class="hljs-name">key</span>></span>io.flutter.embedded_views_preview<span class="hljs-tag"></<span class="hljs-name">key</span>></span>
<span class="hljs-tag"><<span class="hljs-name">true</span>/></span>
<span class="hljs-tag"><<span class="hljs-name">key</span>></span>NSSpeechRecognitionUsageDescription<span class="hljs-tag"></<span class="hljs-name">key</span>></span>
<span class="hljs-tag"><<span class="hljs-name">string</span>></span>We need access to recognize your speech.<span class="hljs-tag"></<span class="hljs-name">string</span>></span>
<span class="hljs-tag"><<span class="hljs-name">key</span>></span>NSCameraUsageDescription<span class="hljs-tag"></<span class="hljs-name">key</span>></span>
<span class="hljs-tag"><<span class="hljs-name">string</span>></span>This app needs access to the camera to capture photos and videos.<span class="hljs-tag"></<span class="hljs-name">string</span>></span>
<span class="hljs-tag"><<span class="hljs-name">key</span>></span>NSMicrophoneUsageDescription<span class="hljs-tag"></<span class="hljs-name">key</span>></span>
<span class="hljs-tag"><<span class="hljs-name">string</span>></span>This app needs access to the microphone for audio recording.<span class="hljs-tag"></<span class="hljs-name">string</span>></span>
<span class="hljs-tag"><<span class="hljs-name">key</span>></span>NSPhotoLibraryUsageDescription<span class="hljs-tag"></<span class="hljs-name">key</span>></span>
<span class="hljs-tag"><<span class="hljs-name">string</span>></span>This app needs access to your photo library.<span class="hljs-tag"></<span class="hljs-name">string</span>></span>
<span class="hljs-tag"><<span class="hljs-name">key</span>></span>NSPhotoLibraryAddUsageDescription<span class="hljs-tag"></<span class="hljs-name">key</span>></span>
<span class="hljs-tag"><<span class="hljs-name">string</span>></span>This app needs permission to save photos to your photo library.<span class="hljs-tag"></<span class="hljs-name">string</span>></span>
<span class="hljs-tag"><<span class="hljs-name">key</span>></span>NSAppTransportSecurity<span class="hljs-tag"></<span class="hljs-name">key</span>></span>
<span class="hljs-tag"><<span class="hljs-name">dict</span>></span>
<span class="hljs-tag"><<span class="hljs-name">key</span>></span>NSAllowsArbitraryLoads<span class="hljs-tag"></<span class="hljs-name">key</span>></span>
<span class="hljs-tag"><<span class="hljs-name">true</span>/></span>
<span class="hljs-tag"></<span class="hljs-name">dict</span>></span>
<span class="hljs-tag"></<span class="hljs-name">dict</span>></span>
<span class="hljs-tag"></<span class="hljs-name">plist</span>></span>
Here’s what’s going on:
iOS permissions are declared in the Info.plist file using specific keys (NS...UsageDescription) and require a user-facing string explaining why the permission is needed. This string is displayed to the user when the app requests the permission.

- io.flutter.embedded_views_preview: Often added when using Flutter plugins that embed native UI components (for example, camera previews or webviews). It enables a preview of embedded native views.
- NSSpeechRecognitionUsageDescription: The privacy description for speech recognition services (for example, Apple's built-in speech recognizer). It's crucial for voice input features to work.
- NSCameraUsageDescription: The privacy description for camera access, required for capturing images via the camera, as used in the image-picking functionality.
- NSMicrophoneUsageDescription: The privacy description for microphone access, necessary for recording audio for speech input.
- NSPhotoLibraryUsageDescription: The privacy description for reading from the user's photo library, required when picking existing images or videos from the gallery.
- NSPhotoLibraryAddUsageDescription: The privacy description for writing to the user's photo library. This would be needed if the app captures photos or videos and saves them directly to the device's gallery.
- NSAppTransportSecurity / NSAllowsArbitraryLoads: This section relates to Apple's App Transport Security (ATS). By default, ATS enforces secure connections (HTTPS). Setting NSAllowsArbitraryLoads to true (as shown here) disables this enforcement, allowing the app to make insecure HTTP connections. While useful during development or for specific legacy APIs, it's generally not recommended for production apps because of the security implications. For production, you should ideally configure specific exceptions or ensure all network requests use HTTPS.
Assets: Managing Application Resources
Assets are files bundled with your application and are accessible at runtime. This typically includes images, fonts, audio files, and more.
In this application, we have an assets folder, and inside it, an images subfolder.
assets/
└── images/
├── placeholder.png
└── app_logo.png
- placeholder.png: This image is typically used as a temporary visual cue when actual content (like an image being loaded or picked) is not yet available. It provides a better user experience than a blank space.
- app_logo.png: This is the primary logo of the application. It's used for various purposes, including the app icon and the splash screen.
To ensure Flutter knows about these assets and bundles them with the application, you need to declare them in your pubspec.yaml file:
<span class="hljs-attr">flutter:</span>
<span class="hljs-attr">uses-material-design:</span> <span class="hljs-literal">true</span>
<span class="hljs-attr">assets:</span>
<span class="hljs-bullet">-</span> <span class="hljs-string">assets/images/</span> <span class="hljs-comment"># This line tells Flutter to include all files in the assets/images/ directory</span>
App Icons: Customizing Your Application’s Identity
Flutter applications can use the flutter_launcher_icons package to simplify the process of generating app icons for different platforms and resolutions. This ensures your app has a consistent and professional look on both Android and iOS devices.
pubspec.yaml Configuration for flutter_launcher_icons
<span class="hljs-attr">flutter_icons:</span>
<span class="hljs-attr">android:</span> <span class="hljs-string">"launcher_icon"</span>
<span class="hljs-attr">ios:</span> <span class="hljs-literal">true</span>
<span class="hljs-attr">image_path:</span> <span class="hljs-string">"assets/images/app_logo.png"</span>
<span class="hljs-attr">remove_alpha_ios:</span> <span class="hljs-literal">true</span>
<span class="hljs-attr">adaptive_icon_background:</span> <span class="hljs-string">"#FFFFFF"</span>
<span class="hljs-attr">adaptive_icon_foreground:</span> <span class="hljs-string">"assets/images/app_logo.png"</span>
Here’s what’s happening:
- flutter_icons:: The root key for the flutter_launcher_icons package configuration.
- android: "launcher_icon": Specifies that Android launcher icons should be generated. "launcher_icon" is the default name and usually sufficient.
- ios: true: Enables the generation of iOS app icons.
- image_path: "assets/images/app_logo.png": The path to the source image used to generate the icons. It's crucial that this path is correct and points to a high-resolution square image.
- remove_alpha_ios: true: For iOS, this option removes the alpha channel from the icon, since iOS icons do not use transparency.
- adaptive_icon_background: "#FFFFFF": Specific to Android Adaptive Icons (introduced in Android 8.0 Oreo). It defines the background layer of the adaptive icon; here it's set to white (#FFFFFF).
- adaptive_icon_foreground: "assets/images/app_logo.png": Defines the foreground layer of the adaptive icon. It uses app_logo.png again, which will be masked and scaled by the Android system.
Generating App Icons
After configuring pubspec.yaml, run the following commands in your terminal:
If you prefer a standalone configuration file, you can first run dart run flutter_launcher_icons:generate, which scaffolds a flutter_launcher_icons.yaml that the package can use instead of pubspec.yaml. Since the image_path is already set in pubspec.yaml here, that step is optional.

Then run dart run flutter_launcher_icons. This executes the flutter_launcher_icons package, which reads the image_path from the configuration and generates all the necessary icon files at various resolutions for both Android and iOS, placing them in the correct native project directories.
Splash Screen: The First Impression
A splash screen (or launch screen) is the first screen users see when they open your app. It provides a branded experience while the app initializes resources. The flutter_native_splash package simplifies creating native splash screens for Flutter apps.
pubspec.yaml Configuration for flutter_native_splash
<span class="hljs-attr">flutter_native_splash:</span>
<span class="hljs-attr">color:</span> <span class="hljs-string">"#FFFFFF"</span>
<span class="hljs-attr">image:</span> <span class="hljs-string">assets/images/app_logo.png</span>
<span class="hljs-attr">android:</span> <span class="hljs-literal">true</span>
<span class="hljs-attr">android_gravity:</span> <span class="hljs-string">center</span>
<span class="hljs-attr">fullscreen:</span> <span class="hljs-literal">true</span>
<span class="hljs-attr">ios:</span> <span class="hljs-literal">true</span>
Here’s what’s happening:
- `flutter_native_splash:`: The root key for the `flutter_native_splash` package configuration.
- `color: "#FFFFFF"`: Sets the background color of the splash screen. Here, it's set to white.
- `image: assets/images/app_logo.png`: The path to the image displayed on the splash screen – in this case, the application's logo.
- `android: true`: Enables splash screen generation for Android.
- `android_gravity: center`: On Android, centers the splash image on the screen.
- `fullscreen: true`: Makes the splash screen appear in fullscreen mode, without status or navigation bars.
- `ios: true`: Enables splash screen generation for iOS.
Generating the Splash Screen
After configuring `pubspec.yaml`, run `dart run flutter_native_splash:create` in your terminal. It processes the configuration and generates the native splash screen files (for example, launch images and drawables) in the respective Android and iOS project folders, ensuring they are properly integrated into the native launch process.
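Beyond generating the native assets, `flutter_native_splash` also exposes a small runtime API for keeping the splash screen visible until your app finishes initializing (for example, while Firebase or the Gemini client is being set up). This is optional and requires the package to be a regular dependency rather than a dev dependency. Here's a minimal sketch – `MyApp` and the startup delay are placeholders, not what Snap2Chef actually does:

```dart
import 'package:flutter/material.dart';
import 'package:flutter_native_splash/flutter_native_splash.dart';

Future<void> main() async {
  // Keep the native splash screen visible while async setup runs.
  final widgetsBinding = WidgetsFlutterBinding.ensureInitialized();
  FlutterNativeSplash.preserve(widgetsBinding: widgetsBinding);

  // Placeholder for real startup work (Firebase init, loading config, etc.).
  await Future<void>.delayed(const Duration(seconds: 1));

  runApp(const MyApp());

  // Remove the splash screen once the first frame is ready to show.
  FlutterNativeSplash.remove();
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: const Scaffold(body: Center(child: Text('Snap2Chef'))),
    );
  }
}
```

In practice you'd call `FlutterNativeSplash.remove()` wherever your startup work actually completes – for instance, after the first screen finishes its initial loading.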
Screenshots from the App
Keep in mind that the output quality can vary depending on the AI model you're using. The same applies to YouTube links and image URLs – sometimes they work perfectly, and other times they may not. So if something doesn't work as expected, it's not necessarily on your end.

Also, remember that there are many ways to achieve this, and you don't necessarily have to use this method. I'll provide some other resources you can check out below. For example, you can use system instructions instead of defining constraints in the prompt text the way I did – see the sketch below.
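To illustrate that last point, here's a hedged sketch of moving the constraints into a system instruction using the `google_generative_ai` Dart package. The model name, API key handling, and instruction text are all placeholders, and if you go through Firebase AI Logic / Vertex AI in Firebase instead, its SDK exposes an equivalent `systemInstruction` parameter:

```dart
import 'package:google_generative_ai/google_generative_ai.dart';

Future<void> main() async {
  // Placeholder: in a real app, load the key from secure storage or a backend proxy.
  const apiKey = String.fromEnvironment('GEMINI_API_KEY');

  // The constraints live in a system instruction instead of being repeated in every prompt.
  final model = GenerativeModel(
    model: 'gemini-1.5-flash', // placeholder model name
    apiKey: apiKey,
    systemInstruction: Content.system(
      'You are Snap2Chef, a cooking assistant. Only answer questions about food. '
      'Always return a recipe name, an ingredient list, and numbered cooking steps.',
    ),
  );

  // The user prompt can now stay short, since the behavioral rules are already set.
  final response = await model.generateContent([
    Content.text('What can I cook with rice, tomatoes, and chicken?'),
  ]);

  print(response.text);
}
```

The upside of this approach is that every call to the model inherits the same rules, so your prompts only need to carry the user's actual request.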
Here’s the completed project: https://github.com/Atuoha/snap2chef_ai
Wrapping Up
I hope this comprehensive breakdown has given you a clear understanding of the “Snap2Chef” application’s structure, UI components, and underlying configurations. May your coding journey be filled with creativity and successful implementations.
Happy coding!
References
Here are some references for the key technologies and packages used in this application:
Flutter Packages
- `flutter/material.dart`: The core Flutter Material Design package. Reference: Flutter API Docs – material library
- `iconsax/iconsax.dart`: A custom icon set for Flutter. Reference: pub.dev – iconsax
- `gap/gap.dart`: A simple package for adding spacing between widgets. Reference: pub.dev – gap
- `dotted_border/dotted_border.dart`: A Flutter package to draw a dotted border around any widget. Reference: pub.dev – dotted_border
- `flutter/cupertino.dart`: The core Flutter Cupertino (iOS-style) widgets package. Reference: Flutter API Docs – cupertino library
- `flutter_launcher_icons`: A package for generating application launcher icons. Reference: pub.dev – flutter_launcher_icons
- `flutter_native_splash`: A package for generating native splash screens. Reference: pub.dev – flutter_native_splash
- `image_picker` (implicitly used by `ImageUploadController`): A Flutter plugin for picking images from the image library or taking new photos with the camera. Though not directly imported in the provided snippets, `ImageUploadController` likely uses this or a similar package. Reference: pub.dev – image_picker
- `image_cropper` (implicitly used by `ImageUploadController`): A Flutter plugin for cropping images, likely used in conjunction with `image_picker` for `assignCroppedImage`. Reference: pub.dev – image_cropper
APIs and Platforms
- Gemini API: Google's family of generative AI models. Reference: Google AI Gemini API. Documentation: Google Cloud – Gemini API Documentation
- Firebase: Google's comprehensive app development platform. Reference: Firebase Official Website. Documentation: Firebase Documentation
- Firebase Console/Studio: The web-based interface for managing Firebase projects.
- Vertex AI: Google Cloud's machine learning platform. Reference: Google Cloud – Vertex AI. Documentation: Google Cloud – Vertex AI Documentation