Building an AI Mobile Application with Hugging Face and a Stable Diffusion Model
In this tutorial, we will create a React-based mobile app using Framework7 to generate and share AI-generated images on social media platforms. The app allows users to input a query, fetch an image from an AI service, and share it on Facebook or Instagram. We’ll cover topics such as UI components, state management, and social media integration.
Get Started
First, let’s clone this GitHub repo to your local directory; then we will walk through the important parts of the codebase. Here is the application we are building.
Routing/Navigation
This file establishes the route configuration for the Framework7 application. It defines two routes: one for the home page (`/`) and another for the history page (`/history/`). Each route is associated with a React component (`HomePage` and `HistoryPage`), enabling navigation between these pages in the application.
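A minimal sketch of such a route configuration (the page import paths are assumptions based on a typical Framework7 project layout):

```javascript
// src/js/routes.js — a sketch; the import paths are assumptions
import HomePage from '../pages/home.jsx';
import HistoryPage from '../pages/history.jsx';

const routes = [
  {
    path: '/',            // home page
    component: HomePage,
  },
  {
    path: '/history/',    // history page
    component: HistoryPage,
  },
];

export default routes;
```

This array is then registered with the Framework7 app when it is initialized.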
Home Page
The UI is structured with Framework7 components: a Navbar, an input field, a “Go” button, an image display area, and social media sharing buttons. The layout is responsive, and the components are styled using utility classes.
The social media sharing buttons work only on mobile devices.
The `generatePicture` function fetches an image from the AI service based on the user’s input. It handles loading states and error conditions, and dispatches a store action that adds the generated image to the state and saves it to localStorage so it can be displayed later on the History page.
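The flow described above can be sketched roughly like this (the setter names, store API, and payload shape are assumptions based on the text, not the repo’s exact code):

```javascript
// A sketch of the generatePicture flow. setLoading/setImage/setError stand in
// for React state setters; `store` and `getImageFromAI` for the project's
// store and fetch helper. All of these names are assumptions.
async function generatePicture({ query, getImageFromAI, store, setLoading, setImage, setError }) {
  setLoading(true);            // show a spinner while the request is in flight
  setError(null);
  try {
    const base64Image = await getImageFromAI(query);
    setImage(base64Image);     // display the generated image
    // persist it so the History page can list it later
    store.dispatch('addImage', {
      query,
      image: base64Image,
      date: new Date().toISOString(),
    });
  } catch (err) {
    setError(err.message);     // surface network/API failures to the UI
  } finally {
    setLoading(false);
  }
}
```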
Here is the `getImageFromAI` function from `src/js/util.js`.
It is simply a `fetch` call to the Hugging Face API with the `stable-diffusion-xl-base-1.0` model. The response from the API is extracted as a Blob using the `blob()` method. A Blob represents binary data, which in this case contains the generated image. Next, to display it in an image element, we need to convert it to Base64 with the `convertToBase64` function.
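A rough sketch of such a helper, assuming the public Hugging Face Inference API endpoint for this model (`HF_TOKEN` is a placeholder for your own access token, and `convertToBase64` is the helper discussed next):

```javascript
// A sketch of getImageFromAI, assuming the Hugging Face Inference API.
// HF_TOKEN is a placeholder: replace it with your own access token.
const HF_TOKEN = 'hf_xxxxxxxxxxxxxxxx';
const MODEL_URL =
  'https://api-inference.huggingface.co/models/stabilityai/stable-diffusion-xl-base-1.0';

async function getImageFromAI(prompt) {
  const response = await fetch(MODEL_URL, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${HF_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ inputs: prompt }),
  });
  if (!response.ok) {
    throw new Error(`Image generation failed with status ${response.status}`);
  }
  // the API responds with raw image bytes, which we read as a Blob
  const imageBlob = await response.blob();
  return convertToBase64(imageBlob);
}
```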
It takes a Blob of binary data as input and returns a Promise. It uses the FileReader API to read the binary data as a Data URL, which is a Base64-encoded string representation of the data. The Promise resolves with the Base64-encoded string once the read operation completes.
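A minimal sketch of that conversion helper:

```javascript
// A sketch of convertToBase64: it wraps the callback-based FileReader API
// in a Promise that resolves with a "data:...;base64,..." URL string.
function convertToBase64(blob) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    // reader.result holds the Data URL once reading finishes
    reader.onloadend = () => resolve(reader.result);
    reader.onerror = () => reject(reader.error);
    reader.readAsDataURL(blob);
  });
}
```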
Now let’s look at the `store` object where we manage the “database” for this application.
We define `images` as a property of the `state` that stores all generated images. It reads the data (if any) from localStorage via the `readFromLocalStorage` function. `addImage` is the action that adds a new image to the state (we call it from the Home page via `store.dispatch('addImage', {…})`) and, at the same time, saves it to localStorage via the `writeToLocalStorage` function.
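The store logic can be sketched like this. For testability the sketch takes the storage object as a parameter; in the app it would be `window.localStorage`, and the returned definition would be passed to Framework7’s `createStore`. The key and helper names are assumptions based on the description above:

```javascript
// A sketch of the store described above; names are assumptions.
const STORAGE_KEY = 'images';

function readFromLocalStorage(storage) {
  try {
    return JSON.parse(storage.getItem(STORAGE_KEY)) || [];
  } catch (e) {
    return []; // missing or corrupted data: start with an empty history
  }
}

function writeToLocalStorage(storage, images) {
  storage.setItem(STORAGE_KEY, JSON.stringify(images));
}

function createStoreDefinition(storage) {
  return {
    state: {
      images: readFromLocalStorage(storage), // restore any saved history
    },
    actions: {
      // called from the Home page as store.dispatch('addImage', {...})
      addImage({ state }, image) {
        state.images = [...state.images, image];
        writeToLocalStorage(storage, state.images);
      },
    },
  };
}
```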
Social Media Sharing Plugin
If you are running the application on a mobile phone, you can share the generated image to your social media platforms, Facebook or Instagram. This is done with the `shareFacebook` and `shareInstagram` functions defined in the `src/js/utils.js` file.
If you are running in a browser, it will display an alert message instead.
To use this sharing feature, we need to install the `cordova-plugin-x-socialsharing` Cordova plugin. The usage is straightforward, as shown above.
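The two helpers might look roughly like this, assuming the `shareViaFacebook` and `shareViaInstagram` methods exposed by the plugin at `window.plugins.socialsharing` (the share message text is an invented placeholder):

```javascript
// A sketch of the sharing helpers, assuming the cordova-plugin-x-socialsharing
// API. In a plain browser window.plugins is undefined, so we fall back to an
// alert, matching the behavior described above.
function canShare() {
  return typeof window !== 'undefined' && window.plugins && window.plugins.socialsharing;
}

function shareFacebook(imageBase64) {
  if (canShare()) {
    // shareViaFacebook(message, image, link)
    window.plugins.socialsharing.shareViaFacebook('Generated with AI', imageBase64, null);
  } else {
    alert('Sharing is only available on a mobile device.');
  }
}

function shareInstagram(imageBase64) {
  if (canShare()) {
    // shareViaInstagram(message, image)
    window.plugins.socialsharing.shareViaInstagram('Generated with AI', imageBase64);
  } else {
    alert('Sharing is only available on a mobile device.');
  }
}
```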
History Page
Finally, let’s look at the History page. It displays all generated images; tapping one shares it to your Instagram account.
It fetches the array of images from the store and displays them in a list with titles, dates, and clickable thumbnails. The component handles both the case where there are images in the history and the case where there are none, showing a message in the latter scenario. Clicking an image triggers the sharing function.
Running On Mobile
If you want to package it as a mobile application, you can sign up for a free account with Monaca and import this project. You can then run it on both Android and iOS devices in no time.
Conclusion
In this article, we learned how to use Hugging Face to develop a simple AI image generation mobile application. This is just a brief introduction to a small part of what Hugging Face can offer; please explore it further.
Happy Coding.