Apple launches AI model on GitHub



Proof that Apple is investing massively in artificial intelligence? A new AI image-editing model has just been presented. The accompanying conference paper was published for ICLR 2024, and development is open source.

Bit by bit, Apple’s advances in artificial intelligence are coming to light. The Californian company has just put MGIE (Guiding Instruction-based Image Editing via Multimodal Large Language Models) online.

This model lets you modify an image using instructions given in natural language: the changes can be very general or more elaborate. For example, you can change a person’s clothes or erase certain elements from a photo (like the Magic Eraser on Google’s Pixel phones).

The model was developed in collaboration with researchers at UC Santa Barbara. The idea is to explore AI-based image editing in which the retouching is guided by text.
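According to the paper, MGIE's core idea is that a multimodal language model first rewrites a terse user instruction into a more explicit editing instruction, which then guides the actual edit. The toy sketch below only illustrates that two-stage shape; every name is hypothetical, the keyword rules stand in for the language model, and the brightness shift stands in for the real editing model — none of this is MGIE's actual API.

```python
# Toy sketch of instruction-guided image editing in the spirit of MGIE.
# All names are hypothetical; the real system uses a multimodal LLM and
# a learned image editor, not the keyword rules and pixel math below.

def expand_instruction(instruction: str) -> str:
    """Stand-in for the multimodal LLM that rewrites a terse user
    request into an explicit editing instruction."""
    rules = {
        "brighter": "increase every pixel's brightness by 40",
        "darker": "decrease every pixel's brightness by 40",
    }
    for keyword, explicit in rules.items():
        if keyword in instruction.lower():
            return explicit
    return instruction

def apply_edit(image: list[list[int]], explicit: str) -> list[list[int]]:
    """Stand-in for the editing model: applies a brightness shift to a
    grayscale image stored as a list of rows of 0-255 values."""
    if "increase" in explicit:
        delta = 40
    elif "decrease" in explicit:
        delta = -40
    else:
        return image
    return [[min(255, max(0, px + delta)) for px in row] for row in image]

# Usage: a 2x2 grayscale "photo" edited via a natural-language request.
photo = [[100, 120], [240, 10]]
edited = apply_edit(photo, expand_instruction("make it brighter"))
print(edited)  # [[140, 160], [255, 50]]
```

The point of the two separate steps is the one the paper makes: the quality of the edit depends on turning a vague human request into an explicit instruction before the image is touched.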

For Apple, it is a way of demonstrating that human-language commands can significantly improve the control and accessibility of visual manipulation.

  • The conference paper is available as a PDF
  • You can follow development on GitHub
  • A demo is available in a web browser from this page on Hugging Face. Good news: the demo works not only in Safari, but also in Firefox and Chromium-based browsers.
