Here's everything Google has announced at I/O so far

Here are quick hits of the biggest news from the keynote as they are announced.

It’s that moment you’ve been waiting for all year: Google I/O keynote day! Google kicks off its developer conference each year with a rapid-fire stream of announcements, including first looks at many of the things it’s been working on. Brian already kicked us off by sharing what we’re expecting.

We know you don’t always have time to watch the whole two-hour presentation today, so we’re taking that on and will deliver quick hits of the biggest news from the keynote as they are announced, all in an easy-to-digest, easy-to-skim list. Here we go!

Google Maps

Google unveiled a new “Immersive View for Routes” feature for Maps in select cities. It brings everything a user may need into one place, including traffic simulations, bike lanes, complex intersections, parking and more. Read more.

Magic Editor and Magic Compose

We always want to change something about the photo we just took, and Google’s Magic Editor feature now uses AI to make more complex edits to specific parts of a photo, such as the foreground or background. It can also fill in gaps in a photo or even reposition the subject for a better-framed shot. Check it out.

There is also a new feature called Magic Compose, demoed today with Messages conversations, that rewrites texts in different styles. “For example, the feature could make the message sound more positive or more professional, or you could just have fun with it and make the message sound like it was ‘written by your favorite playwright,’ aka Shakespeare,” Sarah writes. Read more.

PaLM 2

Frederic has your look at PaLM 2, Google’s newest large language model (LLM). He writes that “PaLM 2 will power Google’s updated Bard chat tool, the company’s competitor to OpenAI’s ChatGPT, and function as the foundation model for most of the new AI features the company is announcing today.” PaLM 2 also features improved support for writing and debugging code. More here. Also, Kyle takes a deeper dive into PaLM 2 with a more critical look at the model through the lens of a Google-authored research paper.
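PaLM 2 is also reaching developers through Google’s PaLM API. As a rough idea of what calling it can look like, here’s a minimal Python sketch assuming the google.generativeai SDK; the model name, parameters and API key below are placeholders, and the interface may differ from what ultimately ships.

    import google.generativeai as palm

    # Assumes a PaLM API key from Google (placeholder value below).
    palm.configure(api_key="YOUR_API_KEY")

    # "models/text-bison-001" is an illustrative PaLM 2 text model name.
    response = palm.generate_text(
        model="models/text-bison-001",
        prompt="Explain what a large language model is in one sentence.",
        temperature=0.2,
        max_output_tokens=128,
    )

    print(response.result)  # The top generated completion, if the call succeeds.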

Bard gets smarter

Good news: Google is not only removing its waitlist for Bard and making it available, in English, in over 180 countries and territories, but it’s also launching support for Japanese and Korean, with a goal of supporting 40 languages in the near future. Also new is Bard’s ability to surface images in its responses. Find out more. In addition, Google is partnering with Adobe for some art generation capabilities via Bard. Kyle writes that “Bard users will be able to generate images via Firefly and then modify them using Express. Within Bard, users will be able to choose from templates, fonts and stock images as well as other assets from the Express library.”

Workspace

Google’s Workspace suite is also getting the AI touch to make it smarter, with the addition of automatic table (but not formula) generation in Sheets and image creation in Slides and Meet. Initially, the table generation is fairly basic, though Frederic notes there is more to come with regard to using AI to create formulas. In Slides and Meet, you’ll be able to type in what kind of visualization you’re looking for, and the AI will create that image. For Google Meet, that means custom backgrounds. Check out more.

MusicLM

MusicLM is Google’s new experimental AI tool that turns text into music. Kyle writes that if you’re hosting a dinner party, for example, you can simply type “soulful jazz for a dinner party” and have the tool create several versions of the song. Read more.

Search

Google Search is getting two new features aimed at better understanding content and the context of an image a user is viewing in the search results. Sarah reports that this includes an “About this Image” feature and new markup in the file itself that will allow images to be labeled as “AI-generated.” Both are extensions of work already underway and are meant to provide more transparency about whether an “image is credible or AI-generated,” albeit not an end-all-be-all answer to the larger problem of AI image misinformation.

Aisha has more on Search, including that Google is experimenting with an AI-powered conversational mode. She describes the experience: “users will see suggested next steps when conducting a search and display an AI-powered snapshot of key information to consider, with links to dig deeper. When you tap on a suggested next step, Search takes you to a new conversational mode, where you can ask Google more about the topic you’re exploring. Context will be carried over from question to question.”

Sidekick

Darrell has your look at a new tool unveiled today called Sidekick, writing that it is designed “to help provide better prompts, potentially usurping the one thing people are supposed to be able to do best in the whole generative AI loop.” Sidekick will live in a side panel in Google Docs and is “constantly engaged in reading and processing your entire document as you write, providing contextual suggestions that refer specifically to what you’ve written.”

Codey

We like the name of Google’s new code completion and code generation tool, Codey. It’s part of a number of AI-centric coding tools launching today and is Google’s answer to GitHub’s Copilot, paired with a chat tool for asking questions about coding. Codey is specifically trained to handle coding-related prompts as well as queries related to Google Cloud in general. Read more.
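For a feel of what calling a Codey-style code-generation model could look like on Google Cloud, here’s a minimal Python sketch using the Vertex AI SDK. The “code-bison” model name, module path and method names are assumptions based on how Vertex AI exposes its language models, not confirmed details from today’s announcement.

    import vertexai
    from vertexai.preview.language_models import CodeGenerationModel

    # Assumes a Google Cloud project with Vertex AI enabled (placeholders below).
    vertexai.init(project="your-gcp-project", location="us-central1")

    # Assumed identifier for Google's code-generation model (illustrative).
    model = CodeGenerationModel.from_pretrained("code-bison@001")

    # Generate code from a natural-language prompt.
    response = model.predict(
        prefix="Write a Python function that checks whether a string is a palindrome.",
        max_output_tokens=256,
        temperature=0.2,
    )

    print(response.text)  # The suggested code, if the call succeeds.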

Google Cloud

There’s a new A3 supercomputer virtual machine in town, aimed squarely at demanding AI workloads. Ron writes that “this A3 has been purpose-built to handle the considerable demands of these resource-hungry use cases,” noting that A3 is “armed with NVIDIA’s H100 GPUs and combining that with a specialized data center to derive immense computational power with high throughput and low latency, all at what they suggest is a more reasonable price point than you would typically pay for such a package.”

Imagen in Vertex

Google also announced new AI models heading to Vertex AI, its fully managed AI service, including a text-to-image model called Imagen. Kyle writes that Imagen was previewed via Google’s AI Test Kitchen app last November. It can generate and edit images as well as write captions for existing images.
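As a rough sketch of how a managed text-to-image model like Imagen might be called from the Vertex AI Python SDK, here’s a short example. The vision_models module path, the “imagegeneration@002” model identifier and the method signatures are assumptions for illustration; the production API may well differ.

    import vertexai
    from vertexai.preview.vision_models import ImageGenerationModel

    # Assumes a Google Cloud project with access to Imagen on Vertex AI (placeholders below).
    vertexai.init(project="your-gcp-project", location="us-central1")

    # Assumed identifier for the Imagen text-to-image model (illustrative).
    model = ImageGenerationModel.from_pretrained("imagegeneration@002")

    images = model.generate_images(
        prompt="A watercolor painting of a lighthouse at sunrise",
        number_of_images=1,
    )

    images[0].save(location="lighthouse.png")  # Write the first generated image to disk.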

Find My Device

Piggybacking on Apple and Google teaming up on Bluetooth tracker safety measures and a new specification, Google introduced a series of improvements to its own Find My Device network, including proactive alerts when an unknown tracker, such as Apple’s AirTag or another Bluetooth tracker, is detected traveling with you. Google’s goal with the upgrades is to “offer increased safety and security for their own respective user bases by making these alerts work across platforms in the same way — meaning, for example, the work Apple did to make AirTags safer following reports they were being used for stalking would also make its way to Android devices,” Sarah writes.

Pixel 7a

Google’s Pixel 7a goes on sale May 11 for $499, $100 less than the Pixel 7. Like the Pixel 6a, it has a 6.1-inch screen, versus the Pixel 7’s 6.4 inches. It’s also launching in India. The camera has a slightly higher pixel density, though Brian said, “I really miss the flexibility and zoom of the 7 Pro, but I was able to grab some nice shots around my neighborhood with the 7a’s cameras.” Its new chip also enables features like Face Unblur and Super Res Zoom. Find the full breakdown here.

Project Tailwind

The name sounds more like an undercover government assignment, but to Google, Project Tailwind is an AI-powered notebook tool it is building with the aim of taking a user’s freeform notes and automatically organizing and summarizing them. The tool is available through Labs, Google’s refreshed hub for experimental products. Here’s how it works: users pick files from Google Drive, and Project Tailwind creates a private AI model with expertise in that information, along with a personalized interface designed to help sift through the notes and docs. Check it out.

Generative AI wallpapers

Now that you’ve got that new Pixel 7a in your hand, you have to make it pretty! Google will roll out generative AI wallpapers this fall that let Android users answer suggested prompts to describe their vision. The feature uses Google’s text-to-image diffusion models to generate new and original wallpapers, and the color palette of your Android system will automatically match the wallpaper you’ve selected. More here.

Wear OS 4

Google debuted the next version of its smartwatch operating system, Wear OS 4. Here’s what you’ll notice: improved battery life and functionality and new accessibility features, like text-to-speech. Developers also have some new tools to build new Wear OS watch faces and publish them to Google Play. Watch for Wear OS 4 to launch later this year. Read more.

Stay tuned for more developments as the day “unfolds,” get it?

Read more about Google I/O 2023 on TechCrunch
