Feed aggregator

Android's Circle to Search can now help students solve math and physics homework

Engadget - Tue, 05/14/2024 - 13:02

Google has introduced another capability for its Circle to Search feature at the company's annual I/O developer conference, and it's something that could help students better understand potentially difficult class topics. The feature will now be able to show them step-by-step instructions for a "range of physics and math word problems." They just have to activate the feature by long-pressing the home button or navigation bar and then circling the problem that's got them stumped, though some math problems will require users to be signed up for Google's experimental Search Labs feature.

The company says Circle to Search's new capability was made possible by its new family of AI models called LearnLM that was specifically created and fine-tuned for learning. It's also planning to make adjustments to this particular capability and to roll out an upgraded version later this year that could solve even more complex problems "involving symbolic formulas, diagrams, graphs and more." Google launched Circle to Search earlier this year at a Samsung Unpacked event, where the feature debuted on Galaxy S24 and Pixel 8 devices. It's since come to the Galaxy S23, Galaxy S22, Z Fold, Z Flip, Pixel 6 and Pixel 7, and it'll likely make its way to more hardware in the future. 

In addition to the new Circle to Search capability, Google has also revealed that devices that can support the Gemini for Android chatbot assistant will now be able to bring it up as an overlay on top of the application that's currently open. Users can then drag and drop images straight from the overlay into apps like Gmail, for instance, or use the overlay to look up information without having to swipe away from whatever they're doing. They can tap "Ask this video" to find specific information within a YouTube video that's open, and if they have access to Gemini Advanced, they can use the "Ask this PDF" option to find information from within lengthy documents. 

Google is also rolling out multimodal capabilities to Nano, the smallest model in the Gemini family that can process information on-device. The updated Gemini Nano, which will be able to process sights, sounds and spoken language, is coming to Google's TalkBack screen reader later this year. Gemini Nano will enable TalkBack to describe images onscreen more quickly and even without an internet connection. Finally, Google is currently testing a Gemini Nano feature that can alert users while a call is ongoing if it detects common conversation patterns associated with scams. Users will be alerted, for instance, if they're talking to someone asking them for their PINs or passwords or to someone asking them to buy gift cards. 

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/androids-circle-to-search-can-now-help-students-solve-math-and-physics-homework-180223229.html?src=rss
Categories: Technology

Google's Gemini will search your videos to help you solve problems

Engadget - Tue, 05/14/2024 - 12:52

As part of its push toward adding generative AI to search, Google has introduced a new twist: video. Gemini will let you upload video that demonstrates an issue you're trying to resolve, then scour user forums and other areas of the internet to find a solution. 

As an example, Google's Rose Yao talked onstage at I/O 2024 about a used turntable she bought and how she couldn't get the needle to sit on the record. Yao uploaded a video showing the issue, then Gemini quickly found an explainer describing how to balance the arm on that particular make and model. 

"Search is so much more than just words in a text box. Often the questions you have are about the things you see around you, including objects in motion," Google wrote. "Searching with video saves you the time and trouble of finding the right words to describe this issue, and you’ll get an AI Overview with steps and resources to troubleshoot."

If the video alone doesn't make it clear what you're trying to figure out, you can add text or draw arrows that point to the issue in question. 

OpenAI just introduced GPT-4o with the ability to interpret live video in real time, then describe a scene or even sing a song about it. Google, however, is taking a different tack with video by focusing on its Search product for now. Searching with video is coming to Search Labs US users in English to start with, but will expand to more regions over time, the company said.

This article originally appeared on Engadget at https://www.engadget.com/googles-gemini-will-search-your-videos-to-help-you-solve-problems-175235105.html?src=rss
Categories: Technology

Google Search will now show AI-generated answers to millions by default

Engadget - Tue, 05/14/2024 - 12:45

Google is shaking up Search. On Tuesday, the company announced big new AI-powered changes to the world’s dominant search engine at I/O, Google’s annual conference for developers. With the new features, Google is positioning Search as more than a way to simply find websites. Instead, the company wants people to use its search engine to directly get answers and help them with planning events and brainstorming ideas.

“[With] generative AI, Search can do more than you ever imagined,” wrote Liz Reid, vice president and head of Google Search, in a blog post. “So you can ask whatever’s on your mind or whatever you need to get done — from researching to planning to brainstorming — and Google will take care of the legwork.”

Google’s changes to Search, the primary way that the company makes money, are a response to the explosion of generative AI since OpenAI’s ChatGPT was released at the end of 2022. Since then, a handful of AI-powered apps and services including ChatGPT, Anthropic's Claude, Perplexity, and Microsoft’s Bing, which is powered by OpenAI’s GPT-4, have challenged Google’s flagship service by directly providing answers to questions instead of simply presenting people with a list of links. This is the gap that Google is racing to bridge with its new features in Search.

Starting today, Google will show complete AI-generated answers in response to most search queries at the top of the results page in the US. Google first unveiled the feature a year ago at Google I/O in 2023, but so far, anyone who wanted to use it had to sign up through the company’s Search Labs platform, which lets people try out upcoming features ahead of their general release. Google is now making AI Overviews available to hundreds of millions of Americans, and says it expects the feature to reach over a billion people in more countries by the end of the year. Reid wrote that people who opted to try the feature through Search Labs have used it “billions of times” so far, and said that links included in AI-generated answers get more clicks than if the page had appeared as a traditional web listing, something publishers have been concerned about. “As we expand this experience, we’ll continue to focus on sending valuable traffic to publishers and creators,” Reid wrote. 

In addition to AI Overviews, searching for certain queries around dining and recipes (and later movies, music, books, hotels, shopping and more) in English in the US will show a new search page where results are organized using AI. “[When] you’re looking for ideas, Search will use generative AI to brainstorm with you and create an AI-organized results page that makes it easy to explore,” Reid said in the blog post.

If you opt in to Search Labs, you’ll be able to access even more features powered by generative AI in Google Search. You’ll be able to get AI Overviews to simplify the language or break down a complex topic in more detail, such as a query asking Google to explain the connection between lightning and thunder.

Search Labs testers will also be able to ask Google really complex questions in a single query to get answers on a single page instead of having to do multiple searches. The example that Google’s blog post gives: “Find the best yoga or pilates studios in Boston and show details on their intro offers and walking time from Beacon Hill.” In response, Google shows the highest-rated yoga and pilates studios near Boston’s Beacon Hill neighborhood and even puts them on a map for easy navigation.

Google also wants to become a meal and vacation planner: people who sign up for Search Labs can ask queries like “create a 3 day meal plan for a group that’s easy to prepare” and then swap out individual results in the AI-generated plan (replacing a meat-based dish with a vegetarian one, for instance).

Finally, Google will eventually let anyone who signs up for Search Labs use a video as a search query instead of text or images. “Maybe you bought a record player at a thrift shop, but it’s not working when you turn it on and the metal piece with the needle is drifting unexpectedly,” wrote Reid in Google’s blog post. “Searching with video saves you the time and trouble of finding the right words to describe this issue, and you’ll get an AI Overview with steps and resources to troubleshoot.”

Google said that all these new capabilities are powered by a brand new Gemini model customized for Search that combines Gemini’s advanced multi-step reasoning and multimodal abilities with Google’s traditional search systems.

This article originally appeared on Engadget at https://www.engadget.com/google-search-will-now-show-ai-generated-answers-to-millions-by-default-174512845.html?src=rss
Categories: Technology

AI in Gmail will sift through emails, provide search summaries, send emails

Ars Technica - Tue, 05/14/2024 - 12:44

Google's Gemini AI often just feels like a chatbot built into a text-input field, but you can really start to do special things when you give it access to a ton of data. Gemini in Gmail will soon be able to search through your entire backlog of emails and show a summary in a sidebar.

That's simple to describe but solves a huge problem with email: even searching brings up a list of email subjects, and you have to click through each one just to read it. Having an AI sift through a bunch of emails and provide a summary sounds like a huge time saver and something you can't do with any other interface.

Google's one-minute demo of this feature showed a big blue Gemini button at the top right of the Gmail web app. Tapping it opens the normal chatbot sidebar you can type in. Asking for a summary of emails from a certain contact will get you a bullet-point list of what has been happening, with a list of "sources" at the bottom that will jump you right to a certain email. In the last second of the demo, the user types, "Reply saying I want to volunteer for the parent's group event," hits "enter," and then the chatbot instantly, without confirmation, sends an email.

Categories: Technology

Google unveils Veo and Imagen 3, its latest AI media creation models

Engadget - Tue, 05/14/2024 - 12:36

It's all AI all the time at Google I/O! Today, Google announced its new AI media creation engines: Veo, which can produce "high-quality" 1080p videos; and Imagen 3, its latest text-to-image framework. Neither sounds particularly revolutionary, but they're a way for Google to keep up the fight against OpenAI's Sora video model and Dall-E 3, a tool that has practically become synonymous with AI-generated images.

Google claims Veo has "an advanced understanding of natural language and visual semantics" to create whatever video you have in mind. The AI-generated videos can last "beyond a minute." Veo is also capable of understanding cinematic and visual techniques, like the concept of a timelapse. But really, that should be table stakes for an AI video generation model, right?

To prove that Veo isn't out to steal artists' jobs, Google has also partnered with Donald Glover and Gilga, his creative studio, to show off the model's capabilities. In a very brief promotional video, we see Glover and crew using text to create video of a convertible arriving at a European home, and a sailboat gliding through the ocean. According to Google, Veo can simulate real-world physics better than its previous models, and it's also improved how it renders high-definition footage.

"Everybody's going to become a director, and everybody should be a director," Glover says in the video, absolutely earning his Google paycheck. "At the heart of all of this is just storytelling. The closer we are to be able to tell each other our stories, the more we'll understand each other."

It remains to be seen if anyone will actually want to watch AI-generated video, outside of the morbid curiosity of seeing a machine attempt to algorithmically recreate the work of human artists. But that's not stopping Google or OpenAI from promoting these tools and hoping they'll be useful (or at least, make a bunch of money). Veo will be available inside of Google's VideoFX tool today for some creators, and the company says it'll also be coming to YouTube Shorts and other products. If Veo does end up becoming a built-in part of YouTube Shorts, that's at least one feature Google can lord over TikTok.

As for Imagen 3, Google is making the usual promises: It's said to be the company's "highest quality" text-to-image model, with an "incredible level of detail" for "photorealistic, lifelike images" and fewer artifacts. The real test, of course, will be to see how it handles prompts compared to Dall-E 3. Imagen 3 handles text better than before, Google says, and it's also smarter about handling details from long prompts.

Google is also working with recording artists like Wyclef Jean and Bjorn to test out its Music AI Sandbox, a set of tools that can help with song and beat creation. We only saw a brief glimpse of this, but it's led to a few intriguing demos.

The sun rises and sets. We're all slowly dying. And AI is getting smarter by the day. That seems to be the big takeaway from Google's latest media creation tools. Of course they're getting better! Google is pouring billions into making the dream of AI a reality, all in a bid to own the next great leap for computing. Will any of this actually make our lives better? Will they ever be able to generate art with genuine soul? Check back at Google I/O every year until AGI actually appears, or our civilization collapses.

This article originally appeared on Engadget at https://www.engadget.com/google-unveils-veo-and-imagen-3-its-latest-ai-media-creation-models-173617373.html?src=rss
Categories: Technology

5,471-piece Lego Barad-Dûr set will turn its watchful Eye to us in June

Ars Technica - Tue, 05/14/2024 - 12:33

Here's something for any Lord of the Rings fan with a tall, narrow space available on their tchotchkes shelf: Lego has announced a $460, 5,471-piece rendition of Barad-Dûr, which viewers of the films will recognize as "that giant black tower with the flaming eye on top of it."

Sauron, Base Master of Treachery, will keep his Eye on you from atop the tower, which will actually glow thanks to a built-in light brick. The tower includes a minifig of Sauron himself, plus the Mouth of Sauron, Gollum, and a handful of Orcs.

The Lego Barad-Dûr set will launch on June 1 for Lego Insiders and June 4 for everybody else. If you buy it between June 1 and June 7, you'll also get the "Fell Beast" bonus set, with pose-able wings and a Nazgûl minifig. It doesn't seem as though this bonus set will be sold separately, making it much harder to buy the nine Nazgûl you would need to make your collection story-accurate.

Categories: Technology

Google just snuck a pair of AR glasses into a Project Astra demo at I/O

Engadget - Tue, 05/14/2024 - 12:28

In a video showcasing the prowess of Google's new Project Astra experience at I/O 2024, an unnamed demonstrator asked Gemini "do you remember where you saw my glasses?" The AI impressively responded "Yes, I do. Your glasses were on a desk near a red apple," despite said object not actually being in view when the question was asked. But these glasses weren't your bog-standard assistive vision aid; they had a camera onboard and some sort of visual interface!

The tester picked up their glasses and put them on, and proceeded to ask the AI more questions about things they were looking at. Clearly, there is a camera on the device that's helping it take in the surroundings, and we were shown some sort of interface where a waveform moved to indicate it was listening. Onscreen captions appeared to reflect the answer that was being read aloud to the wearer, as well. So if we're keeping track, that's at least a microphone and speaker onboard too, along with some kind of processor and battery to power the whole thing. 

We only caught a brief glimpse of the wearable, but from the sneaky seconds it was in view, a few things were evident. The glasses had a simple black frame and didn't look at all like Google Glass. They didn't appear very bulky, either. 

In all likelihood, Google is not ready to actually launch a pair of glasses at I/O. It breezed right past the wearable's appearance and barely mentioned them, only to say that Project Astra and the company's vision of "universal agents" could come to devices like our phones or glasses. We don't know much else at the moment, but if you've been mourning Google Glass or the company's other failed wearable products, this might instill some hope yet.

This article originally appeared on Engadget at https://www.engadget.com/google-just-snuck-a-pair-of-ar-glasses-into-a-project-astra-demo-at-io-172824539.html?src=rss
Categories: Technology

Google's Project Astra uses your phone's camera and AI to find noise makers, misplaced items and more.

Engadget - Tue, 05/14/2024 - 12:28

When Google first showcased its Duplex voice assistant technology at its developer conference in 2018, it was both impressive and concerning. Today, at I/O 2024, the company may be bringing up those same reactions again, this time by showing off another application of its AI smarts with something called Project Astra. 

The company couldn't even wait till its keynote today to tease Project Astra, posting a video to its social media of a camera-based AI app yesterday. At its keynote today, though, Google's DeepMind CEO Demis Hassabis shared that his team has "always wanted to develop universal AI agents that can be helpful in everyday life." Project Astra is the result of progress on that front. 

What is Project Astra?

According to a video that Google showed during a media briefing yesterday, Project Astra appeared to be an app which has a viewfinder as its main interface. A person holding up a phone pointed its camera at various parts of an office and verbally said "Tell me when you see something that makes sound." When a speaker next to a monitor came into view, Gemini responded "I see a speaker, which makes sound."

The person behind the phone stopped and drew an onscreen arrow to the top circle on the speaker and said, "What is that part of the speaker called?" Gemini promptly responded "That is the tweeter. It produces high-frequency sounds."

Then, in the video that Google said was recorded in a single take, the tester moved over to a cup of crayons further down the table and asked "Give me a creative alliteration about these," to which Gemini said "Creative crayons color cheerfully. They certainly craft colorful creations."

Wait, were those Project Astra glasses? Is Google Glass back?

The rest of the video goes on to show Gemini in Project Astra identifying and explaining parts of code on a monitor, telling the user what neighborhood they were in based on the view out the window. Most impressively, Astra was able to answer "Do you remember where you saw my glasses?" even though said glasses were completely out of frame and were not previously pointed out. "Yes, I do," Gemini said, adding "Your glasses were on a desk near a red apple."

After Astra located those glasses, the tester put them on and the video shifted to the perspective of what you'd see on the wearable. Using a camera onboard, the glasses scanned the wearer's surroundings to see things like a diagram on a whiteboard. The person in the video then asked "What can I add here to make this system faster?" As they spoke, an onscreen waveform moved to indicate it was listening, and as it responded, text captions appeared in tandem. Astra said "Adding a cache between the server and database could improve speed."
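
Astra's whiteboard advice is the classic read-through cache pattern: serve repeat reads from memory and only hit the slow database on a miss. Here is a minimal Python sketch of the idea; the `db_query` function, the key names and the simulated latency are all invented for illustration:

```python
import time

# Pretend database with slow lookups.
DATABASE = {"user:42": {"name": "Ada"}}

def db_query(key):
    time.sleep(0.05)  # simulate network + disk round-trip latency
    return DATABASE.get(key)

cache = {}

def get(key):
    """Read-through cache: return a memory hit if we have one,
    otherwise fetch from the database and remember the result."""
    if key in cache:
        return cache[key]
    value = db_query(key)
    cache[key] = value
    return value

t0 = time.perf_counter(); get("user:42"); cold = time.perf_counter() - t0
t0 = time.perf_counter(); get("user:42"); warm = time.perf_counter() - t0
assert warm < cold  # the second read never touches the database
```

Real deployments add an eviction policy and invalidation on writes, which is where the hard part of caching actually lives.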

The tester then looked over to a pair of cats doodled on the board and asked "What does this remind you of?" Astra said "Schrodinger's cat." Finally, they picked up a plush tiger toy, put it next to a cute golden retriever and asked for "a band name for this duo." Astra dutifully replied "Golden stripes."

How does Project Astra work?

All of this means that not only was Astra processing visual data in real time, it was also remembering what it saw and working with an impressive backlog of stored information. This was achieved, according to Hassabis, because these "agents" were "designed to process information faster by continuously encoding video frames, combining the video and speech input into a timeline of events, and caching this information for efficient recall."
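
That pipeline (encode each frame, merge video and speech into one chronological log, cache it for recall) can be mimicked with a toy event log. Everything below, including the `TimelineCache` class and the sample summaries, is a hypothetical illustration, not Google's implementation:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class TimelineEvent:
    timestamp: float
    kind: str      # "frame" or "speech"
    summary: str   # stand-in for an encoded description

class TimelineCache:
    """Bounded chronological cache of encoded frame/speech events."""
    def __init__(self, max_events=1000):
        self.events = deque(maxlen=max_events)  # oldest entries evicted

    def add(self, timestamp, kind, summary):
        self.events.append(TimelineEvent(timestamp, kind, summary))

    def recall(self, keyword):
        # Return the most recent event whose summary mentions the keyword.
        for event in reversed(self.events):
            if keyword.lower() in event.summary.lower():
                return event
        return None

cache = TimelineCache()
cache.add(1.0, "frame", "a speaker next to a monitor")
cache.add(2.0, "frame", "glasses on a desk near a red apple")
cache.add(3.0, "speech", "user asks what the tweeter does")

hit = cache.recall("glasses")
print(hit.summary)  # glasses on a desk near a red apple
```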

It's also worth noting that, at least in the video, Astra responded quickly. Hassabis noted in a blog post that "While we’ve made incredible progress developing AI systems that can understand multimodal information, getting response time down to something conversational is a difficult engineering challenge."

Google has also been working on giving its AI a greater range of vocal expression, enhancing its speech models to give the agents "a wider range of intonations." This sort of mimicry of human expressiveness in responses is reminiscent of Duplex's pauses and utterances that led people to think Google's AI might be a candidate for the Turing test.

When will Project Astra be available?

While Astra remains an early feature with no discernible plans for launch, Hassabis wrote that in the future, these assistants could be available "through your phone or glasses." No word yet on whether those glasses are actually a product or the successor to Google Glass, but Hassabis did write that "some of these capabilities are coming to Google products, like the Gemini app, later this year."

This article originally appeared on Engadget at https://www.engadget.com/googles-project-astra-uses-your-phones-camera-and-ai-to-find-noise-makers-misplaced-items-and-more-172642329.html?src=rss
Categories: Technology

Google's new Gemini 1.5 Flash AI model is lighter than Gemini Pro and more accessible

Engadget - Tue, 05/14/2024 - 12:23

Google announced updates to its Gemini family of AI models at I/O, the company’s annual conference for developers, on Tuesday. It’s rolling out a new model called Gemini 1.5 Flash, which it says is optimized for speed and efficiency.

“[Gemini] 1.5 Flash excels at summarization, chat applications, image and video captioning, data extraction from long documents and tables, and more,” wrote Demis Hassabis, CEO of Google DeepMind, in a blog post. Hassabis added that Google created Gemini 1.5 Flash because developers needed a model that was lighter and less expensive than the Pro version, which Google announced in February. Gemini 1.5 Pro is more efficient and powerful than the company’s original Gemini model announced late last year.

Gemini 1.5 Flash sits between Gemini 1.5 Pro and Gemini Nano, Google’s smallest model that runs locally on devices. Despite being lighter weight than Gemini 1.5 Pro, however, it is just as powerful. Google said that this was achieved through a process called “distillation,” where the most essential knowledge and skills from Gemini 1.5 Pro were transferred to the smaller model. This means that Gemini 1.5 Flash will get the same multimodal capabilities as Pro, as well as its long context window – the amount of data that an AI model can ingest at once – of one million tokens. This, according to Google, means that Gemini 1.5 Flash will be capable of analyzing a 1,500-page document or a codebase with more than 30,000 lines at once. 
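
Google doesn't say how the distillation was done, but the standard recipe trains the small "student" model to match the large "teacher" model's softened output distribution rather than just hard labels. A toy sketch of that loss; the logits are made up and nothing here is Gemini-specific:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()              # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened outputs and the
    student's: minimizing this transfers the teacher's knowledge of
    how classes relate, not just its top-1 answers."""
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = [4.0, 1.0, 0.2]
close   = [3.8, 1.1, 0.3]   # student that mimics the teacher
far     = [0.2, 1.0, 4.0]   # student that doesn't

assert distillation_loss(teacher, close) < distillation_loss(teacher, far)
```

The temperature softens both distributions so that small differences between wrong answers still carry a training signal.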

Gemini 1.5 Flash, like the rest of these models, isn’t really meant for consumers. Instead, it’s a faster and less expensive option for developers building their own AI products and services with tech designed by Google.

In addition to launching Gemini 1.5 Flash, Google is also upgrading Gemini 1.5 Pro. The company said that it had “enhanced” the model’s abilities to write code, reason and parse audio and images. But the biggest update is yet to come – Google announced it will double the model’s existing context window to two million tokens later this year. That would make it capable of processing two hours of video, 22 hours of audio, more than 60,000 lines of code or more than 1.4 million words at the same time.

Both Gemini 1.5 Flash and Pro are now available in public preview in Google’s AI Studio and Vertex AI. The company also announced today a new version of its Gemma open model, called Gemma 2. But unless you’re a developer or someone who likes to tinker around with building AI apps and services, these updates aren’t really meant for the average consumer.

This article originally appeared on Engadget at https://www.engadget.com/googles-new-gemini-15-flash-ai-model-is-lighter-than-gemini-pro-and-more-accessible-172353657.html?src=rss
Categories: Technology

Feds probe Waymo driverless cars hitting parked cars, drifting into traffic

Ars Technica - Tue, 05/14/2024 - 12:13

A Waymo self-driving car in downtown San Francisco on Bush and Sansome Streets as it drives and transports passengers. (credit: JasonDoiy | iStock Unreleased)

Crashing into parked cars, drifting over into oncoming traffic, intruding into construction zones—all this "unexpected behavior" from Waymo's self-driving vehicles may be violating traffic laws, the US National Highway Traffic Safety Administration (NHTSA) said Monday.

To better understand Waymo's potential safety risks, NHTSA's Office of Defects Investigation (ODI) is now looking into 22 incident reports involving cars equipped with Waymo’s fifth-generation automated driving system. Seventeen incidents involved collisions, but none involved injuries.

Some of the reports came directly from Waymo, while others "were identified based on publicly available reports," NHTSA said. The reports document single-party crashes into "stationary and semi-stationary objects such as gates and chains" as well as instances in which Waymo cars "appeared to disobey traffic safety control devices."

Categories: Technology

Ask Google Photos to help make sense of your gallery

Engadget - Tue, 05/14/2024 - 12:08

Google is inserting more of its Gemini AI into many of its products, and the next target in its sights is Photos. At its I/O developer conference today, the company's CEO Sundar Pichai announced a feature called Ask Photos, which is designed to help you find specific images in your gallery by talking to Gemini. 

Ask Photos will show up as a new tab at the bottom of your Google Photos app. It'll start rolling out to Google One subscribers first, starting in US English over the upcoming months. When you tap over to that panel, you'll see the Gemini star icon and a welcome message above a bar that prompts you to "search or ask about Photos."

According to Google, you can ask things like "show me the best photo from each national park I've visited," which not only draws upon GPS information but also requires the AI to exercise some judgement in determining what is "best." The company's VP for Photos Shimrit Ben-Yair told Engadget that you'll be able to provide feedback to the AI and let it know which pictures you preferred instead. "Learning is key," Ben-Yair said.

You can also ask Photos to find your top photos from a recent vacation and generate a caption to describe them so you can more quickly share them to social media. Again, if you didn't like what Gemini suggested, you can also make tweaks later on.

For now, you'll have to type your query to Ask Photos — voice input isn't yet supported. And as the feature rolls out, those who opt in to use it will see their existing search feature get "upgraded" to Ask. However, Google said that "key search functionality, like quick access to your face groups or the map view, won't be lost."

The company explained that there are three parts to the Ask Photos process: "Understanding your question," "crafting a response" and "ensuring safety and remembering corrections." Though safety is only mentioned in the final stage, it should be baked in the entire time. The company acknowledged that "the information in your photos can be deeply personal, and we take the responsibility of protecting it very seriously."

To that end, queries are not stored anywhere, though they are processed in the cloud (not on device). People will not review conversations or personal data in Ask Photos, except "in rare cases to address abuse or harm." Google also said it doesn't train "any generative AI product outside of Google Photos on this personal data, including other Gemini models and products."

Your media continues to be protected by the same security and privacy measures that cover your use of Google Photos. That's a good thing, since one of the potentially more helpful ways to use Ask Photos might be to get information like passport or license expiry dates from pictures you might have snapped years ago. It uses Gemini's multimodal capabilities to read text in images to find answers, too.

Of course, AI isn't new in Google Photos. You've always been able to search the app for things like "credit card" or a specific friend, using the company's facial and object recognition algorithms. But Gemini AI brings generative processing so Photos can do a lot more than just deliver pictures with certain people or items in them.

Other applications include getting Photos to tell you what themes you might have used for the last few birthday parties you threw for your partner or child. Gemini AI is at work here to study your pictures and figure out what themes you already adopted.

There are a lot of promising use cases for Ask Photos, which is an experimental feature at the moment and is "starting to roll out soon." Like other Photos tools, it might begin as a premium feature for Google One subscribers and Pixel owners before trickling down to all who use the free app. There's no official word yet on when or whether that might happen, though.

This article originally appeared on Engadget at https://www.engadget.com/ask-google-photos-to-get-help-making-sense-of-your-gallery-170734062.html?src=rss
Categories: Technology

Castlevania is coming to Dead by Daylight later this year

Engadget - Tue, 05/14/2024 - 11:45

Dead by Daylight fans can check out the Dungeons and Dragons chapter starting today, but Behaviour Interactive teased another high-profile crossover during an anniversary showcase on Tuesday: a Castlevania chapter is on the way to DbD. There aren't any details yet about what that will include, but you just might get to play as Alucard or Simon Belmont in the fog. Behaviour plans to divulge more info in August about the Castlevania tie-up, which will arrive later this year.

As for the Dungeons and Dragons chapter, which brings a dark fantasy element to DbD for the first time, Behaviour spilled the beans on that during the stream. PC players will be able to try out the chapter on the public test build today before it goes live for everyone on June 3.

Behaviour Interactive

The new killer is Vecna (the D&D version rather than the Stranger Things baddie), and stalwart video game actor and Critical Role mastermind Matt Mercer is voicing the character. The latest survivor is actually two identities in one: a bard character you can opt to play as a female elf or a male human, bringing a slight element of D&D-style character creation to DbD.

The chapter will also include a new map, which (surprise, surprise) is a dungeon. Whenever you're up against Vecna, you'll be able to find treasure chests which will trigger a roll of a 20-sided die when opened. Rolling a one will net you a nasty surprise while getting a 20 grants you a powerful magical item. Roll any number in between and you'll get a helpful item.

Speaking of maps, there will be larger ones to check out in an upcoming new mode. DbD has long pitted four survivors against one killer. A pair of killers will soon be able to team up and hunt eight survivors. They'll be able to take advantage of team powers too.

There will be a lot of changes for this 2 vs. 8 mode, which will be around for a limited time at first. Perks will be jettisoned in favor of a class system, and there won't be any hooks. Downed survivors will instead go straight to a cage. If a survivor is caged three times, they're out of the game. Behaviour sees this as more of a party mode as opposed to the competitive nature of 1 vs. 4. The 2 vs. 8 mode is slated to arrive later this summer, and you can expect to find out more about it in July.

Behaviour also had some news about several DbD spinoff games that are in the works. The Casting of Frank Stone is a single-player, narrative-focused game set in the DbD universe and developer Supermassive has released the first gameplay trailer.

The spinoff tells the story of a group of young people who venture into a condemned steel mill in 1980 while attempting to film their own horror movie. There, they discover evidence of crimes committed by serial killer Frank Stone.

The gameplay will sound very familiar to those who have played previous Supermassive games like Until Dawn and The Quarry. The direction of the story will shift based on your narrative decisions and how you handle environmental puzzles and quick-time events. The Casting of Frank Stone, which is said to be about five to seven hours long, is slated to arrive later this year.

An untitled co-op shooter spinoff from Midwinter Entertainment is still in early development, but it now has a codename: Project T. It'll be a third-person game and unlike the survivors in DbD, you'll actually be able to fight back against enemies using guns. Fans who want to find out more can sign up for an insider program, which will include updates, closed playtests and the chance to provide feedback.

That's not all though, as Behaviour announced yet another DbD spinoff. What the Fog is a two-person co-op roguelike that it developed in-house. The premise is that DbD survivors Claudette and Dwight are sucked into a cursed board game, Jumanji-style. The game is mainly played in third-person, but if you die you'll move into a bird's-eye support mode, where you can help your teammate survive. Just like in DbD, you'll need to interact with a hook to revive your ally. There's also a single-player mode, and Feng Min is an unlockable character.

Behaviour Interactive

What the Fog shares some elements with DbD. You'll need to pick up tokens called blood points by roaming the map and killing enemies. These let you activate generators so you can escape a room. You'll get a buff from each generator and acquire a weapon upgrade after each round. There are bosses to take down too. What the Fog also has a cartoonier look than DbD's realistic art style.

I've played a few rounds of the single-player mode and I'm enjoying it quite a bit so far. The metal soundtrack and monster-slaying chaos actually reminds me a bit of the Doom series. After unlocking a door, I'd suggest sticking around in the room a while longer to kill some more enemies and snag a bunch of blood points. That way, you'll be able to repair the next room's generators quickly and power up before taking on a fresh army of monsters.

What the Fog is available now on Steam. The first 2 million copies are available for free, though you'll need a Behaviour account to claim one. If you're not quick enough to snag a free copy or just feel like giving Behaviour a few bucks, the game costs $5.

This article originally appeared on Engadget at https://www.engadget.com/castlevania-is-coming-to-dead-by-daylight-later-this-year-164509826.html?src=rss
Categories: Technology

Microsoft Shares Copilot Prompts in New GitHub Repo (Contributions Welcomed)

MSDN News - Tue, 05/14/2024 - 11:40
Microsoft has taken a step further in democratizing the prompt engineering field by sharing a collection of prompts for its various Copilots in a new GitHub repository. It's community-led and contributions are welcome, though you should be comfortable with your forking, branching, cloning and such.
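For readers who haven't contributed to a GitHub repo before, the fork-branch-commit flow looks roughly like the sketch below. It's a minimal, offline simulation: a local directory stands in for the upstream prompt repository (the real repo URL isn't named here), and the branch and file names are made up for illustration. In practice you'd clone your fork's GitHub URL and finish by pushing the branch and opening a pull request.

```shell
# Sketch of the standard fork -> clone -> branch -> commit flow.
# A local directory stands in for the upstream repo so this runs offline.
set -e
workdir=$(mktemp -d) && cd "$workdir"

# Stand-in for the upstream prompts repository.
git init -q upstream
git -C upstream -c user.email=you@example.com -c user.name=You \
    commit -q --allow-empty -m "initial commit"

# "Fork" the repo by cloning it, then work on a topic branch.
git clone -q upstream my-fork && cd my-fork
git switch -q -c add-summarize-prompt

# Add a contribution and commit it.
echo "Summarize this document in five bullet points." > summarize.prompt
git add summarize.prompt
git -c user.email=you@example.com -c user.name=You \
    commit -q -m "Add summarization prompt"

# In practice you would now push the branch and open a pull request:
#   git push origin add-summarize-prompt
```

Working on a topic branch (rather than the default branch) keeps your pull request isolated and easy for maintainers to review.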
Categories: Microsoft

Apple's M1 iPad Air drops to a new low of $399

Engadget - Tue, 05/14/2024 - 10:58

Apple’s M1 iPad Air has dropped to a new low price of $399, just as the latest model prepares to hit store shelves. This sale is from Amazon and it doesn’t include every color, though both blue and purple are covered by this steep discount. The other colors are also on sale, but the deals aren’t quite as spicy. Amazon’s sale is for the base 64GB model.

This device tops our list of the best iPads, though that’s likely to change once the new models enter the chat. No matter what happens with our list in the future, however, this is still a powerful and highly capable tablet with plenty of bells and whistles. We love the gorgeous screen, which is a serious step up from the bottom-rung 10th-gen iPad. This one also gets you a more powerful chip.

We also enjoyed the form factor. It’s called the iPad Air and it shows. This is a lighter-than-average tablet that’s easy to hold and maneuver, even for long periods of time. The M1 chip is powerful enough to handle just about any app or game you throw at it and the 10.9-inch display is bright, sharp and accurate. It’s pretty much the Platonic ideal of a tablet. We even called it “the closest to being universally appealing and the best iPad for most people.” 

There’s no Face ID, which isn’t a huge deal by my estimation as tablets are harder than phones to wrangle into that sweet spot for a quick facial scan. The 64GB of available storage is also on the smaller side, making this device more of a content consumption machine than anything else. The only major downside is that the new iPad Air is a hair better in just about every aspect, though it’s also at least $200 more expensive.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/apples-m1-ipad-air-drops-to-a-new-low-of-399-155816959.html?src=rss
Categories: Technology

Senua’s Saga: Hellblade II highlights the next round of May Game Pass titles

Engadget - Tue, 05/14/2024 - 10:52

Microsoft has unveiled the next round of Xbox Game Pass arrivals. The marquee attraction is Senua’s Saga: Hellblade II, which launches as a day-one Game Pass title on May 21. But the second May batch also includes Humanity (one of the best PlayStation games from last year), hockey sim EA Sports NHL 24, magical first-person shooter Immortals of Aveum, the heartfelt classic Brothers: A Tale of Two Sons and more.

First up is Brothers: A Tale of Two Sons, the 2013 puzzle-adventure game that returns to Game Pass today after several years off the platform. Engadget’s review from yesteryear described the game as “an essential treasure” that makes up for its bland and redundant title with a unique control scheme (a thumbstick for each bro), beautiful visuals (although dated today), clever puzzles and a touching story that weaves together nicely with its action. The game is available for Game Pass members today (Tuesday) for console, PC and cloud players.

The long-awaited Senua’s Saga: Hellblade II arrives on Game Pass on May 21. Announced way back in 2019 alongside the Xbox Series X, the game sends the hero to Iceland, where she tries to find the Vikings who have invaded her hometown. Developer Ninja Theory promises more “perception puzzles led by [Senua’s] experiences of psychosis,” one of the highlights of the 2017 original. You can play it on the cloud, PC and console (Xbox Series X / S) when it lands on Game Pass next week.

tha LTD.

Humanity, the innovative puzzler that launched on PlayStation and Steam in 2023, heads to Game Pass on May 30. From Rez creator Tetsuya Mizuguchi, the game puts you in charge of a Shiba Inu guiding herds of Lemmings-like humans across an impressive 90 stages, including boss fights. It also includes a level-creator tool that lets you make your own or try others’ constructions online.

Meanwhile, Immortals of Aveum borrows first-person shooter mechanics but swaps guns for a little hocus pocus. The single-player, narrative-driven game comes from independent developer Ascendant Studios but manages to look and feel like a big-budget game. Wield otherworldly magic at your fingertips as you step into the Everwar, taking on the Rasharnian Army. The 2023 game comes to Game Pass (cloud, PC and current-gen consoles) on May 16.

Other titles arriving in the second half of May and beyond include the 2014 action RPG Lords of the Fallen (May 30 - cloud, PC, console), EA Sports NHL 24 (May 16 - Xbox Cloud Gaming via EA Play), puzzle-adventure title Chants of Sennaar (May 15 - cloud, console, PC), Moving Out 2 (May 28 - cloud, console, PC) and Firework (June 4 - PC). Several day-one launch titles coming to Game Pass soon include Galacticare (May 23 - cloud, PC, console), Hauntii (May 23 - cloud, console, PC) and Rolling Hills (June 4 - cloud, console, PC).

This article originally appeared on Engadget at https://www.engadget.com/senuas-saga-hellblade-ii-highlights-the-next-round-of-may-game-pass-titles-155216691.html?src=rss
Categories: Technology

A Tomb Raider series from Phoebe Waller-Bridge is on the way to Prime Video

Engadget - Tue, 05/14/2024 - 10:36

Amazon's Prime Video is riding a high after the success of Fallout and it has more video game-related projects lined up. The streaming service has ordered a Tomb Raider series with Phoebe Waller-Bridge of Fleabag fame set as writer and executive producer. The show was rumored to be happening as far back as January 2023 and now it's official.

“If I could tell my teenage self this was happening I think she’d explode. Tomb Raider has been a huge part of my life and I feel incredibly privileged to be bringing it to television with such passionate collaborators," Waller-Bridge said in a statement. "Lara Croft means a lot to me, as she does to many, and I can’t wait to go on this adventure. Bats ‘n all."

Few other details about the "epic, globetrotting" project have been revealed (it's not yet known who's playing Lara Croft, for one thing), though it stems from a deal between Amazon MGM Studios and Crystal Dynamics to develop shows and movies based on Tomb Raider. There's no release window for the series as yet, but Amazon says it will premiere in more than 240 countries and territories. The company's games division is also publishing Crystal Dynamics' next Lara Croft adventure, while a long-awaited animated Tomb Raider series is slated to hit Netflix this year.

Prime Video has also lined up a docuseries about EA's blockbuster Madden NFL games. EA Sports will open up its vault of rare and unreleased footage for the project. A documentary crew will follow the development of the next game in the series.

These are just some of the many announcements that Prime Video is making today as it tries to win over advertisers at its upfront event. A pop culture version of Jeopardy! is on the way to the service, which will also host its first NFL Wild Card Playoff game in January. A Legally Blonde prequel series is coming too.

Elsewhere, Prime Video renewed its hit show The Boys for a fifth season, announced a live-action Spider-Man Noir show starring Nicolas Cage and revealed the first trailer and release date for the second season of Lord of the Rings: The Rings of Power. In addition, a documentary following the last 12 days of Roger Federer’s professional tennis career is coming to Prime Video on June 20.

One final serve. FEDERER: Twelve Final Days, June 20. pic.twitter.com/yKhsTKOgMu

— Prime Video (@PrimeVideo) May 14, 2024

This article originally appeared on Engadget at https://www.engadget.com/a-tomb-raider-series-from-phoebe-waller-bridge-is-on-the-way-to-prime-video-153636273.html?src=rss
Categories: Technology

One of our favorite Roku streaming sticks is on sale for only $34

Engadget - Tue, 05/14/2024 - 09:57

The Roku Streaming Stick 4K is on sale via Amazon for just $34, which is a savings of 32 percent and one of the best prices we’ve seen all year. As the name suggests, this is a streaming stick that provides 4K visuals and ships with a voice remote that works with Siri, Alexa and Hey Google. Of course, this remote also has buttons.

The stick easily made our list of the best streaming devices, for a great many reasons. We were impressed by the sheer amount of free and live content available via Roku’s ecosystem. There’s a diverse array of free linear channels and video-on-demand (VOD) services here, with thousands of series and films to choose from. Not having to pony up for yet another subscription is always nice.

The Roku Streaming Stick 4K can also access all of those paid subscription services, from Disney+ to Peacock and beyond. The interface is uncluttered and easy to navigate, with a simple content list at the left and an app grid on the right. In addition to 4K, the device supports HDR10+ and Dolby Vision. The player even supports Apple AirPlay 2 for streaming audio and video from a tablet or phone.

If we had to nitpick, and that’s pretty much our job, the device’s What to Watch menu prioritizes the aforementioned free content over titles pulled from paid apps. It’d be nice if things were a bit more even, just in case people need a little reminder to finish Sugar on Apple TV+ or Shogun on Hulu. However, it’s tough to be too miffed, as free content is where this Roku device really shines.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/one-of-our-favorite-roku-streaming-sticks-is-on-sale-for-only-34-145718364.html?src=rss
Categories: Technology

Apple, SpaceX, Microsoft return-to-office mandates drove senior talent away

Ars Technica - Tue, 05/14/2024 - 09:40

Enlarge (credit: Getty)

A study analyzing Apple, Microsoft, and SpaceX suggests that return-to-office (RTO) mandates can drive employees, especially senior-level ones, to leave the company, often for competitors.

The study (PDF), published this month by University of Chicago and University of Michigan researchers and reported by The Washington Post on Sunday, says:

In this paper, we provide causal evidence that RTO mandates at three large tech companies—Microsoft, SpaceX, and Apple—had a negative effect on the tenure and seniority of their respective workforce. In particular, we find the strongest negative effects at the top of the respective distributions, implying a more pronounced exodus of relatively senior personnel.

The study looked at résumé data from People Data Labs and used "260 million résumés matched to company data." It only examined three companies, but the report's authors noted that Apple, Microsoft, and SpaceX represent 30 percent of the tech industry's revenue and over 2 percent of the technology industry's workforce. The three companies have also been influential in setting RTO standards across the broader industry. Robert Ployhart, a professor of business administration and management at the University of South Carolina and scholar at the Academy of Management, told the Post that despite the study being limited to three companies, its conclusions are a broader reflection of the effects of RTO policies in the US.

Read 8 remaining paragraphs | Comments

Categories: Technology

Lord of the Rings: The Rings of Power trailer reveals season two release date

Engadget - Tue, 05/14/2024 - 09:25

Amazon's Lord of the Rings: The Rings of Power was both extremely successful and extremely divisive in the LOTR fan community. (Separate question, has any recent adaptation or new content in a beloved franchise not been divisive? Thoughts for another time.) Lots of people whined about how Amazon should just trash the first season and start over, but clearly that was never going to happen. What is happening is that season two of The Rings of Power has its first trailer and an August 29 release date.

I'm a pretty big Lord of the Rings fan and found season one enjoyable if not essential, but I like the looks of how things are ratcheting up here for season two. We get plenty of teases of epic battles and creepy creatures as Sauron reveals himself and begins to tighten the noose on all of Middle-earth; there are also looks at him in his "fair" form as he forges the titular Rings of Power with Celebrimbor. 

Amazon says the first three episodes will arrive on August 29, with subsequent entries following every week. Like the first season, this one will consist of eight episodes total. 

This announcement comes less than a week after Warner Bros. Discovery announced it would release a new live-action Lord of the Rings film in theaters in 2026. Tentatively titled The Hunt for Gollum, the film is directed by and will star Andy Serkis, who played Gollum in Peter Jackson's The Lord of the Rings and The Hobbit trilogies. That project will be set in the same universe that Jackson built, while Amazon's series is an entirely separate entity. There is some shared DNA, though — the first season of The Rings of Power was shot in New Zealand, like Jackson's films, and composer Howard Shore wrote the main credits theme for Amazon's show after scoring all six of the Middle-earth films. 

Oh, and Lego just dropped this incredible Barad-dûr set — it's a big week for Lord of the Rings across the board!

This article originally appeared on Engadget at https://www.engadget.com/lord-of-the-rings-the-rings-of-power-trailer-reveals-season-two-release-date-142522261.html?src=rss
Categories: Technology

Meta encourages you to disregard your seat mates and use VR headsets on a plane

Engadget - Tue, 05/14/2024 - 09:19

Your experience while taking a flight comes down to many random factors, including who sits next to you. Your seatmate has plenty of ways to bother you — you've lived them, we don't need to remind you how — but now there's a whole new option. Meta has announced a new feature called Travel Mode for its Quest 2 and 3 headsets that lets people use the devices while on a plane.

Meta claims it has "specially tuned" its algorithms so the experience remains stable, even if you gaze out the window. Users can try Travel Mode out for themselves by visiting the experimental features section in settings. They can quickly toggle the feature in quick settings and should also get a prompt to activate it while flying on some airlines — though Meta doesn't specify which ones.

In general, if someone is traveling on a flight with Wi-Fi, they can access entertainment like movies, games and messages, but, as Meta's photo indicates, it definitely could intrude on the next person's space (or at least mean their seatmate is flailing their arms all around). However, Meta is also partnering with Lufthansa to offer Quest 3 headsets with custom content and entertainment in select flights' Business Class Suites. As usual, getting any dedicated space on a plane costs a lot of money.

Interestingly, Meta decided to introduce Travel Mode on planes rather than somewhere more stable (read: vehicles moving on the ground), but it plans to expand the feature to trains and other modes of transportation.

This article originally appeared on Engadget at https://www.engadget.com/meta-encourages-you-to-disregard-your-seat-mates-and-use-vr-headsets-on-a-plane-141942620.html?src=rss
Categories: Technology
