

MeowTalk: Alexa developer’s app to translate cat’s miaow



An app that aims to translate your cat’s miaow has been developed by a former Amazon Alexa engineer.

MeowTalk records the sound and then attempts to identify the meaning.

The cat’s owner also helps to label the translation, creating a database for the AI software to learn from.

Currently, there are only 13 phrases in the app’s vocabulary including: “Feed me!”, “I’m angry!” and “Leave me alone!”

Research suggests that, unlike their human servants, cats do not share a language.

Each cat’s miaow is unique and tailored to its owner, with some more vocal than others.

So, instead of a generic database for cat sounds, the app’s translation differs with each individual profile.

By recording and labelling sounds, the artificial intelligence and machine-learning software can better understand each individual cat’s voice – the more it’s used, the more accurate it can become.
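The record-and-label loop described above can be sketched as a nearest-neighbour classifier over per-cat sound features. This is a minimal illustration only: the feature values and labels are invented, and MeowTalk's actual model is not public.

```python
import math

# Hypothetical per-cat training data: each recording is reduced to a small
# feature vector (e.g. pitch, duration, loudness) and labelled by the owner.
labelled_sounds = [
    ([310.0, 0.8, 0.6], "Feed me!"),
    ([450.0, 0.4, 0.9], "I'm angry!"),
    ([280.0, 1.2, 0.3], "Leave me alone!"),
]

def classify(features):
    """Return the label of the closest previously labelled recording."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(labelled_sounds, key=lambda item: dist(item[0], features))[1]

def add_label(features, label):
    """Owner corrections grow the per-cat dataset, improving later guesses."""
    labelled_sounds.append((features, label))

print(classify([300.0, 0.9, 0.5]))  # closest to the "Feed me!" example
```

Because each cat gets its own `labelled_sounds` list, two cats making the same noise can legitimately receive different translations, which matches the per-profile design the article describes.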

The eventual aim is to develop a smart collar, Javier Sanchez, group technical program manager at app developer Akvelon, said in a webinar on its website.

The technology would then translate your cat’s miaow instantly, and a human voice would speak through the collar.

“I think this is especially important now because, with all the social distancing that’s happening, you have people that are confined at home with … a significant other – this feline,” Mr Sanchez added.

“This will enable them to communicate with their cat, or at least understand their cat’s intent, and build a very important connection.”

The app is available free on both Google Play Store and Apple’s App Store.

As it’s still in its early stages of development, there are mixed reviews, with several users complaining of errors in the app.

“I’m getting quite irritated,” one review said. “I just downloaded it and haven’t even been able to use it because it just keeps telling me there is a wifi/connection error.”

“I was getting the translation ‘I’m in love!’ 90% of the time,” another user said.

“While it’s nice to think that my cats love me so much, I’d caught one of my cats hissing and growling during play – and it said she was in love then too.”

But others were positive, and the app has an average rating of 4.3 on the Google Play Store.

“For now, if you don’t take it too seriously, it’s a really fun app,” one review said. “And, who knows, maybe in time, it will really learn my cat’s true meow in all instances. It surely looks promising.”

“Really cool concept and I’ve enjoyed it as my cats never stop talking,” another review added.

However, users have also expressed concern about privacy on the app over how the data from the recordings is stored and used.

In its privacy policy, the app says it is in a “development phase” and advises anyone “concerned about data retention” to uninstall the app until it is fully compliant with the EU’s GDPR privacy law.

“Most cat vocalisations are actually to communicate with humans, as most owners will respond to them,” Juliette Jones, cat behaviour specialist at Wood Green, The Animals Charity, said.

As the app relies on the owner labelling translations, there is room for miscommunication, she added.

“There may be some inaccuracies which could give owners the wrong impression about what their cats are feeling.

“This could be detrimental to the cat, the owner and their relationship – for instance, if a cat is purring it doesn’t necessarily mean they are happy and restful. A purr can also be seeking affection or indicating discomfort. In its current form, the app should only be used for entertainment.”

“We will probably never be able to convert a cat’s miaow into human words,” Anita Kelsey, cat behaviourist and author of ‘Let’s Talk About Cats’, said. “All we can do is have fun thinking about what they might be saying from our own human perspective.

“The app seems like fun and there’s no harm in having fun with your cat.”



Balenciaga to unveil new collection in video game




Luxury fashion house Balenciaga is to unveil its autumn/winter 2021 collection in an original video game.

The firm said players would navigate through a virtual realm, completing tasks and meeting characters clad in the label’s new designs.

Afterworld: The Age of Tomorrow, which will launch on December 6, will be playable in web browsers.

The Covid-19 pandemic has sparked a shift in the way global fashion brands showcase their lines.

Burberry unveiled its latest collection in September via Twitch, and Gucci enlisted singer Harry Styles to help showcase its latest collection line in a seven-episode mini-series.

In April, clothing brand 100 Thieves made its entire streetwear collection available in Nintendo Switch game Animal Crossing: New Horizons.

And last month Balenciaga launched its summer 2021 collection on YouTube.

Amazon has moved to attract luxury fashion shoppers to its marketplace following the shift in consumer habits this year, although a survey of 2,000 UK shoppers earlier this month found that only 32% would be interested in buying fashion via its Luxury Stores online outlet, according to website Just Style.

Marketing manager Anthony Blakemore said that online fatigue due to the pandemic had forced brands to develop new ways to excite consumers.

“We’re seeing brands roll out innovative ways of showcasing and selling their products across the digital landscape,” he told the BBC.

“The gamification of Balenciaga’s new collection is an excellent example of how innovative digital marketing methods can be used to not only sell clothes, but create an immersive and enjoyable digital experience.”

Michael Branney, managing director at clothing retailer Oh Polly, warned that brands should be prepared to maintain this new level of innovation.

“Once the Covid-19 pandemic is over, customers are still going to expect the same level of interaction from the brands they have spent their 2020 with,” he told the BBC.

“Customers are now engaging in longer pieces of content, like a YouTube video, rather than a quick snapshot on Instagram. Perhaps a video game, or interactive content, is the next logical step for brands.”



Face recognition isn’t just for humans — it’s learning to identify bears and cows, too

By Rachel Metz




It’s hard for the average person to tell Dani, Lenore, and Bella apart: They all sport fashionably fuzzy brown coats and enjoy a lot of the same activities, like playing in icy-cold water and, occasionally, ripping apart a freshly caught fish.

Melanie Clapham is not the average person. As a bear biologist, she has spent over a decade studying these grizzly bears, who live in Knight Inlet in British Columbia, Canada, and developed a sense for who is who by paying attention to little things that make them different.
“I use individual characteristics — say, one bear has a nick in its ear or a scar on the nose,” she said.
But Clapham knows most people don’t have her eye for detail, and the bears’ appearances change dramatically over the course of a year — such as when they get winter coats and fatten up before denning — which makes it even harder to distinguish between, say, Toffee and Blonde Teddy.


Tracking individual bears is important, she explained, because it can help with research and conservation of the species; knowing which bear is which could even help with problems like figuring out if a certain grizzly is getting into garbage cans or attacking a farmer’s livestock. Several years ago Clapham began wondering whether a technology typically used to identify humans might be able to help: facial recognition software, which compares measurements between different facial features in one image to those in another.
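The comparison of measurements between facial features that the article describes can be sketched as distance matching between feature vectors. The bear names and numbers below are invented placeholders, not BearID's real data or method.

```python
import math

# Hypothetical "face measurement" vectors extracted from bear photos.
known_bears = {
    "Toffee":       [0.12, 0.55, 0.33, 0.91],
    "Blonde Teddy": [0.74, 0.21, 0.68, 0.15],
}

def identify(measurements, threshold=0.5):
    """Match a new face measurement vector to the closest known bear,
    or report it as unknown if nothing is close enough."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    name, d = min(((n, dist(v, measurements)) for n, v in known_bears.items()),
                  key=lambda item: item[1])
    return name if d < threshold else "unknown"

print(identify([0.10, 0.50, 0.35, 0.90]))  # close to Toffee's vector
print(identify([0.99, 0.99, 0.01, 0.01]))  # too far from every known bear
```

The threshold is what lets such a system say "I don't know this bear" rather than forcing every photo onto the nearest name, which matters given the small size of the group's dataset.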
Clapham teamed up with two Silicon Valley-based tech workers and together they created BearID, which uses facial-recognition software to monitor grizzly bears. So far, the project has used AI to recognize 132 of the animals individually.

While facial-recognition technology is best known as a tool for identifying humans — and a controversial one at that, due to well-known issues regarding privacy, accuracy, and bias — BearID is one of several efforts to adapt it for animals in the wild and on farms. Proponents of the technology, such as Clapham, say it’s a cheaper, longer-lasting, less invasive (and with animals such as bears, less dangerous) way to track animals than, say, attaching a collar or piercing an ear to attach an RFID tag.
Building a grizzly data set
For Clapham, who’s also a postdoctoral fellow at the University of Victoria, this interest in combining bears and AI has been in the works for years. In 2017 she joined an online network that connects conservationists with those in the tech community. There, she quickly met Ed Miller and Mary Nguyen — two tech workers in San Jose, California (who happen to be married) who were interested in machine learning and watching grizzlies via live webcam at another popular bear hangout, Brooks Falls in Alaska’s Katmai National Park.
The trio has since gathered thousands of bear photos from Knight Inlet and Brooks River to create data sets, and adapted existing artificial intelligence software called Dog Hipsterizer (used, naturally, to add silly mustaches and hats to pictures of dogs) to spot bear faces in their images. Once the faces are detected, they can also use AI to recognize specific bears.
“It does way better than we do,” said Miller.
So far, BearID has collected 4,674 images of grizzly bears; 80% of the images were used for training the facial-recognition system, Clapham said, and the remaining 20% for testing it. According to recently published research from her and her collaborators, the system is 84% accurate. The bear you’re trying to recognize must already be in the group’s relatively small dataset, though.
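The 80/20 evaluation protocol described above amounts to a simple held-out split. A minimal sketch, using the image and bear counts reported in the article but with invented image IDs and bear names:

```python
import random

# Toy stand-in for the labelled photo set: (image_id, bear_name) pairs,
# 4,674 images of 132 bears as reported in the article.
dataset = [("img_%04d" % i, "bear_%d" % (i % 132)) for i in range(4674)]

random.seed(0)
random.shuffle(dataset)

split = int(len(dataset) * 0.8)                 # 80% of images train the model
train, test = dataset[:split], dataset[split:]  # the remaining 20% evaluate it

def accuracy(predictions, labels):
    """Fraction of held-out faces identified correctly."""
    return sum(p == l for p, l in zip(predictions, labels)) / len(labels)

print(len(train), len(test))  # 3739 935
```

Holding out the test images is what makes the reported 84% figure meaningful: the system is scored only on photos it never saw during training.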

Facial recognition on the ranch
While BearID is putting names to faces in the wild, Joe Hoagland is trying to do likewise on cattle ranches. Hoagland, a cattle rancher in Leavenworth, Kansas, is building an app called CattleTracs that he said will enable anyone to snap pictures of cattle that will be stored along with GPS coordinates and the date of the photo in an online database. Subsequent photos of the same animal will be able to be matched to the earlier photographs, helping track them over time.
Beef cattle, he explained, pass through many different people and places during their lives, from producers to pasture operations to feed lots and then to meat packing plants. There isn’t much tracking between them, which makes it hard to investigate problems like animal-based diseases that can devastate livestock and may harm people, too. Hoagland expects the app to be available by the end of the year.
“Being able to trace that diseased animal, find its source, quarantine it, do contact tracing — all the things we’re talking about with coronavirus are things we can do with animals, too,” he said.

Hoagland approached KC Olson, a professor at Kansas State University, who brought together a group of specialists at the school in areas like veterinary science and computer science in order to gather pictures of cattle to create a database for training and testing an AI system. They built a proof-of-concept system in March that included more than 135,000 images of 1,000 young beef cattle; Olson said it was 94% accurate at identifying animals, whether or not it had seen them before.
He said that’s far better than what he’s seen with RFID tags and readers, which can work poorly when cattle are densely packed.
“This is a major leap forward in accuracy,” he said.
Gold for poachers
Although facial recognition for animals isn’t fraught with the same privacy, bias, and surveillance issues as it is for people, there are unique issues to consider.
For example, while surveillance technology could help protect animals, it may also be used against them. Tanya Berger-Wolf, co-founder and director of an AI platform for wildlife research projects, stressed the importance of controlling access to animal data to those who have been vetted.
“What’s great for scientists and conservation managers is also gold for poachers of wildlife,” she said.
That’s because a poacher could use images of animals, coupled with data such as GPS coordinates that may be attached to the photos, to find them.
There’s also the difficulty of collecting a large number of images of individual animals — from multiple viewpoints, in different lighting conditions, without obstructions like plants, taken repeatedly over time — to train AI networks.
Anil Jain, a computer science professor at Michigan State University, knows this better than most: He and his colleagues studied how facial-recognition software could be used to identify lemurs, golden monkeys, and chimpanzees — the hope was to help track endangered animals and halt animal trafficking. They released an Android smartphone app in 2018 called PrimID that let users compare their own primate photos to ones in their database.

Jain, who is no longer working on that project, said gathering sufficient animal photos was particularly tricky — especially with lemurs, who may bunch together in a tree. Facial-recognition networks for humans, he noted, may be trained with millions of photos of hundreds of thousands of people; BearID has relied upon just a fraction as many so far, as did Jain’s research.
Clapham said she has more images of some bears than others, so her team is trying to get more of the bears that are less represented in the dataset. The researchers also want to start training their AI system on footage from camera traps, which are cameras equipped with a sensor and lights and placed in the wilderness where animals may wander by and trigger video recordings. They’re considering how BearID could go beyond bears to other animals as well.
“Really any species we can get good training data for we should potentially be able to develop this type of facial recognition for as well,” Clapham said.



Trump Twitter ‘hack’: Dutch police question researcher




Dutch police have questioned a security researcher who said he successfully logged into the US president’s Twitter account by guessing his password.

Last month, well-known cyber investigator Victor Gevers said he had gained access to Donald Trump’s Twitter account with the password ‘MAGA2020!’.

The White House denied it had happened and Twitter said it had no evidence of a hack.

But Mr Gevers has now revealed more information to back up his claims.

As part of the police interrogation, Mr Gevers revealed for the first time that he had substantially more evidence of the “hack” than he had previously released.

He did not reveal exactly what information he had, but by logging in to somebody’s Twitter account someone would in theory be able to:

see and send private messages
see tweets that the user had privately bookmarked
access information such as how many people the account holder had blocked
They would even be able to download an archive of all the user’s data, including photos and messages.

A spokesman for the Dutch Public Prosecution Service confirmed to De Volkskrant newspaper: “We are currently investigating whether something criminal has happened.”

The spokesman said their inquiry was an “independent Dutch investigation” and not based on a US request for legal assistance.

The police told the BBC that Mr Gevers had been questioned as a witness by the High Tech Crime Team and was not yet a suspect.
Police must first prove that the hack happened. If prosecutors consider Mr Gevers’ actions to be illegal and outside the realm of cyber-security research, he could face up to four years in prison.

Mr Gevers told reporters of his hack on 22 October. Dutch news outlet Vrij Nederland first reported the story.

Donald Trump’s Twitter account has about 89 million followers.

Mr Gevers says he was doing a semi-regular sweep of the Twitter accounts of high-profile US election candidates on 16 October when he guessed President Trump’s password.

He did not post any tweets or change any settings, but said he took screenshots of some parts of the president’s account.

He said he had spent days trying to contact the Trump campaign to warn them about their security, which was lacking extra safeguards like two-factor authentication, before going to the press.

Two-factor authentication is a widely used security system that links a phone app or number to an account, to add an extra step to the process of logging in.
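The codes such phone apps generate are commonly produced with the TOTP scheme (RFC 6238): a shared secret plus the current time yields a short-lived numeric code, so a guessed password alone is not enough to log in. A minimal sketch using only the Python standard library (this illustrates the general scheme, not Twitter's specific implementation):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Generate a time-based one-time password (RFC 6238, SHA-1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second steps since the epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation: pick 4 bytes of the HMAC
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59 seconds.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))  # 94287082
```

Because the code changes every 30 seconds and is derived from a secret the attacker never sees, even a correctly guessed password like the one described above would have been useless on its own.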

The US president’s account is now secure.

At the time, Twitter said: “We’ve seen no evidence to corroborate this claim. We proactively implemented account security measures for a designated group of high-profile, election-related Twitter accounts in the United States, including federal branches of government.”

Twitter refused to answer any further questions about the hack, including whether or not the extra security measures were permanently enforced or if the company even has access to the president’s account activity.

Mr Gevers’ story has been met with scepticism by some in the information security world as his screenshots could have been faked.

However, he claims to have a lot more data. He hopes he will not have to disclose it to prosecutors but says he is prepared to if necessary.

He said: “I have evidence that was not included in the responsible disclosure to the Trump team because it did not add anything in alerting the victim of the risk.

“I have shown some of it to a select group of journalists. Police asked me if I was willing to show it and I said no. Only if there is an indication of wrongdoing will the archived material be unlocked.”

The BBC has seen some evidence but has not been able to verify whether all the additional material is genuine.

But Mr Gevers says he is standing by his account of events and hopes that his actions are ruled to have been a normal part of his job as an ethical hacker.

“There should not be a reason for the Dutch National Police, especially the team at the High Tech Crime Unit, to doubt my statement. They know me, they know my work for more than 22 years with the Dutch Institute for Vulnerability Disclosure.

“I did not ‘hack’ Trump’s account, I did not bypass any security system as there was no adequate security in place. I just guessed the password and then tried to warn his team about the risks and how to solve them.”

Earlier this year, Mr Gevers also claimed to have successfully logged into Mr Trump’s Twitter account in 2016.

In that login he and other security researchers used a password linked to another of Donald Trump’s social network accounts that was discovered in a previous data breach.

In that instance Mr Gevers claims the password was another famous catchphrase from the reality TV star and politician: “yourefired”.


