Epic’s MetaHuman tool can now import scans of real people. This is currently open to developers only, but watch this space: I predict it won’t be long before you can create your own MetaHuman that looks like your twin.
One caveat: this was found via Roadtovr.com, and they noted the following:
“There’s still some limitations, however. For one, hair, skin textures, and other details are not automatically generated; at this point the Mesh to MetaHuman feature is primarily focused on matching the overall topology of the head and segmenting it for realistic animations. Developers will still need to supply skin textures and do some additional work to match hair, facial hair, and eyes to the person they want to emulate.
The MetaHuman tool is still in early access and intended for developers of Unreal Engine.”
From creating more realistic avatars to more abstract ones, this time kitted out in futuristic Adidas outfits.
I tried it a couple of times and wasn’t loving my avatar, but that might just be because I’m highly strung.
When you enter the builder you have to answer questions for the system to create your avatar. I won’t say it’s unique, as I have a feeling it’s simply generated from the variables in your selections, which others could also choose.
The values are based on how you see your personality, with questions like “How patient are you?” and “Are you smooth or bold?”… I must admit I wasn’t sure what that last one really meant.
But go have a play here; my avatar is the image above.
Enter Clinique, who have partnered with Daz 3D, a 3D modelling software company, to make the Metaverse more inclusive by creating a massive collection of 8,888 female and non-binary profile-picture avatars to use on your social platforms.
They worked with three influencers from different backgrounds to give advice on looks and tones for different skin types.
Tess Daly @tess.daly – a role model for people with disabilities, who don’t often see themselves reflected in the beauty industry.
Sheika Daley @officialsheiks – This sought-after celebrity makeup artist has always had her hands on a paint brush, thanks to her artist mum.
When you say NFT, most people think it’s all about making a profit and commerce. That isn’t always true, as this next example shows.
Here NFTs are being used to help, in this case to save the rainforest. I wanted to highlight this not just because it’s a really interesting case, but also because I’m very proud of two of my ex-creatives who created this amazing idea (Tiago Beltrame & Nian He).
Navigate the Nemus map and make a ‘promise to conserve’ by minting your own NFT tied to the land. Each NFT drop features original artwork from an amazing artist to honor the unique flora and fauna found in the rainforest.
The World’s First Non-Fungible Territory has been officially renamed by indigenous people in Brazil in coalition with Nemus, a Web3 company that sells Non-Fungible Tokens (NFTs) to protect the Amazon Rainforest.
Around the world, indigenous peoples are stewards of the Earth, responsible for protecting 80% of the planet’s biodiversity.
With NFTs recently skyrocketing in popularity because of outsized gains and celebrity endorsements, land stewards of Brazil decided to showcase a purpose-driven utility for NFTs—to save the Amazon.
This event is captured in the short film, Non-Fungible Territory.
“I believe this [land] is an NFT, I live in an NFT.”
– Lilico, Local Community Resident
Sales of Nemus NFTs are being used to protect the Non-Fungible Territory from the clearcutting that has devastated much of the Amazon, address the $300 billion climate action funding gap to combat deforestation, create sustainable jobs and increase economic activities for the local people.
With a goal to invest a billion dollars in the region, Nemus is already having a positive impact.
They recently released over $100,000 from their treasury to fund the purchase of equipment to develop sustainable harvest methods of Brazil nuts and increase land security.
“If we are to save the Amazon, we must work with the people living there. Creating businesses in the middle of the jungle, with difficult access, no energy source, a population with limited education and qualifications is a huge challenge. But it can be done, and we have the experience. It is a lot of hard work and boots on the ground, but we can create incredible life changing results for the local communities.”
As local communities learn more about the utility of NFTs such as Nemus’, they are embracing the technology as a means to extend their stewardship of the land.
Using Web3 to bring awareness to the Amazon’s needs as well as financial alternatives for its indigenous caretakers, they unite the world around an important cause. “Buy an NFT to save the NFT.”
We have all seen deepfakes and how real they can seem; a group has now taken it a step further, and the resolution and small details like mouth and eye movement are really impressive.
A snippet from the website:
“We propose a system for the one-shot creation of high-resolution human avatars, called megapixel portraits or MegaPortraits for short. Our model is trained in two stages. Optionally, we propose an additional distillation stage for faster inference.
Our training setup is relatively standard. We sample two random frames from our dataset at each step: the source frame and the driver frame. Our model imposes the motion of the driving frame (i.e., the head pose and the facial expression) onto the appearance of the source frame to produce an output image. The main learning signal is obtained from the training episodes where the source and the driver frames come from the same video, and hence our model’s prediction is trained to match the driver frame. “
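To make that two-frame setup concrete, here is a minimal, purely illustrative PyTorch sketch of the training step described in the quote. The tiny convolutional networks are invented stand-ins, not the authors’ actual architecture, and the L1 loss is just a placeholder for their learning signal.

```python
# Illustrative sketch only: toy stand-in networks, not the MegaPortraits model.
import torch
import torch.nn as nn

class TinyPortraitModel(nn.Module):
    """Toy stand-in for the appearance/motion networks described above."""
    def __init__(self):
        super().__init__()
        self.appearance = nn.Conv2d(3, 16, 3, padding=1)  # encodes the source frame
        self.motion = nn.Conv2d(3, 16, 3, padding=1)      # encodes the driver frame
        self.decoder = nn.Conv2d(32, 3, 3, padding=1)     # renders the output image

    def forward(self, source, driver):
        feats = torch.cat([self.appearance(source), self.motion(driver)], dim=1)
        return self.decoder(feats)

model = TinyPortraitModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Two random frames from the same video: the source supplies appearance,
# the driver supplies pose/expression, so the prediction should match the driver.
source = torch.rand(1, 3, 64, 64)
driver = torch.rand(1, 3, 64, 64)

optimizer.zero_grad()
output = model(source, driver)
loss = nn.functional.l1_loss(output, driver)  # the "main learning signal"
loss.backward()
optimizer.step()
```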
On Wednesday 20th July, the best-selling video game of all time, Minecraft[1], decided not to allow non-fungible tokens (NFTs)[2] on its very popular gaming platform.
NFTs represent ownership of unique digital items, normally images, videos and music; they are like a receipt in the physical world, and are recorded on a register such as a blockchain[3].
Minecraft, for those who don’t know, is a virtual world where players can create and customise their environments. Since launch, the game has built a massive following of users who already design skins, maps and mods for it.
Knowing what NFTs are and do, and what already happens in Minecraft, you can understand why Minecraft looked like such a huge opportunity for NFTs: selling skins, maps and mods.
The maker of Minecraft (Mojang Studios) has released a statement; a snapshot is below, along with a link to the full statement.
“NFTs, however, can create models of scarcity and exclusion that conflict with our Guidelines and the spirit of Minecraft.
To ensure that Minecraft players have a safe and inclusive experience, blockchain technologies are not permitted to be integrated inside our client and server applications, nor may Minecraft in-game content such as worlds, skins, persona items, or other mods, be utilized by blockchain technology to create a scarce digital asset.
….uses of NFTs and other blockchain technologies creates digital ownership based on scarcity and exclusion, which does not align with Minecraft values of creative inclusion and playing together. ”
Global chief creative technologist at VMLY&R COMMERCE on tinkering around making games on his lunch breaks, learning the “dark arts of traditional creative” and why digital humans could be the norm sooner than you think
I previously went into some detail on how easy it is to create MetaHumans via Unreal Engine, and how this technology is already being used for good, with the Albert Einstein example.
In such a short amount of time I’m seeing more and more MetaHumans popping up, especially in social.
For example, in the header picture we have one virtual influencer and one ‘real’ person… well, I think!!!
Some brands are using MetaHumans as influencers. Sounds strange, right? You would think: who would follow a MetaHuman over a real person?!
But they do, and it’s a lot of people as well: nearly 6 million for Lu, who I mention below.
Let me introduce you to a few.
Meet Lu, the virtual influencer of Brazilian retailer Magazine Luiza. Lu is the ‘face’ of the brand, sharing the latest technology available to consumers on her social media account @magazineluiza.
Next is Lilmiquela.
@lilmiquela is a self-proclaimed influencer, fashionista, pop star and model.
What I found really interesting about this influencer was the amazing storytelling: there is a whole backstory, baby photos etc., but in the end she does find out that she is in fact a MetaHuman, or ‘Robo’ in her words. With 3m followers they have to be doing something right, like selling the clothing below.
There are many more MetaHuman and other strange influencers out there; a good source for finding them is VirtualHumans.org, which lists the top 35 verified Insta influencers.
Let’s end this little update post on MetaHumans with a little bit of weird…
No, it isn’t a MetaHuman, but it’s proof of the potential of virtual influencers, be they MetaHumans or, in this case, a sausage… yes, that is a sausage advertising an insurance company in the example below. By no means am I mocking the sausage; I am honestly impressed, and I don’t want hate mail from a sausage… you would never be able to explain that one to anybody.
Projection mapping has been with us for a decade or more now, with light shows on buildings and brand logos and objects lighting up the sky, leaving viewers’ mouths wide open.
The art was the main focus: yes, it was beautiful, exciting and captivating…
But… how do we turn something beautiful and engaging into something that has a function: driving transactions for commerce?
I saw the New Balance case 10 years ago, and it wasn’t until I saw the other, still very similar, cases below that I realized that even 10 years later we are using this technology mainly for show rather than function.
New Balance Case
A Commerce Projection solution could be…
So, imagine this functionality: instead of an art show projected onto the shoes, you take artist designs or, even better, community-designed shoes and project them in-store. The customer can then pause the projection and purchase unique, limited-edition designs.
Why? I hear you ask.
This would:
Give the store more designs to share and open the ecosystem up to an open network where communities can help build the brand and sales.
Give the community the ability to design and sell, receiving rewards in return (NFTs / bitcoin / coupons / discounts / FAME).
Give consumers more options that can also be customized, like NikeID but physical rather than digital only.
This simple example shows how we can turn a technology that has been all about the lean-back experience into a lean-forward experience.
Making it more engaging for consumers and more profitable for the brands.
And lowering the environmental impact of mass production and waste. [Thanks Felix for the build]
This can all be done with Creative Technology.
Creative Technology Definition:
“Creative technology has been defined as ‘the blending of knowledge across multiple disciplines to create new experiences or products’ that meet end-user and organizational needs. A more specific conceptualization describes it as the combination of information, holographic systems, sensors, audio technologies, image and video technologies, among others, with artistic practices and methods. The central characteristic is identified as an ability to do things better. Creative technology is also seen as the intersection of new technology with creative initiatives such as fashion, art, advertising, media and entertainment.”
Shamefully, ‘Selves’ isn’t a term I was familiar with until I wrote an article on MetaHumans and was introduced to a fellow at the MIT Initiative on the Digital Economy called Michael Schrage. Michael has been doing collaborative research on ‘Selves’ and all the possibilities and opportunities they could bring us in the future.
Firstly, what are Selves?
Selves, in the simplest terms, are digital duplicates and doppelgangers of ‘Ones’. They’re analogous to the ‘digital twins’ you hear about for the ‘internet of things’. Ideally, digital Selves would amplify all of your best human aspects and attributes, to quote Michael. He also believes they should be designed to mitigate your lesser qualities. He wants a ‘digital Selve’ nudging him to stop interrupting, based on our interview, or so I understand… 🙂
As he puts it, “These ‘multiple selves’ will yield more productive employees, more empathetic companions, and more creative thinkers — not merely automated attendants.” Michael is referring to current agent-based intelligent systems, such as Siri and Alexa, that help you with chores, calendars and lists, and find information for you at speed. These current systems respond with automated answers based on learning.
In short, with the advancements in AI and machine learning, Selves could be a disruptive future and evolution of our current automated attendants.
I asked Michael a few questions based on the above; his responses are below:
Q: Looking at Selves through a commerce lens, would a Selve, embedded into a digital mirror, know how to respond to a shopper query like “I’m not sure which dress or suit looks best on me, what do you think?”
A: That’s (almost) exactly the right question; you want an ‘affective’ self to be able to advise ‘you’.
Forgive the intrinsic gender orientation for this example – which dress is ‘sexier’, more ‘professional’, more ‘stylish’ etc., based on the data-driven/recommender-systems-enabled ‘preferences’ and ‘attributes’ that have been algorithmically inferred. So a Lululemon-like or AR ‘mirror’ should be able to ‘model’ dresses that (literally) reflect one’s ‘selves preference’ – projecting ‘power and confidence’.
My design ethos emphasizes ‘agency’ and ‘choice’ – not the commanding approach.
Multiple selves are about empowering people to get greater ROI (return on introspection) on how they want to be seen and how they want to (in your use case) see themselves.
Q: Another use case could be a helpline for addiction, talking someone down from self-harming. How would a Selve respond differently?
A: Wow, again, great question. There are now a ton of ‘mindfulness’ apps and other ‘mental health’-oriented ‘chatbots’ that could, indeed, be used to create a different/healthier dialogue/conversation with one’s self. But now we’re venturing into areas where I think more serious research needs to be done: i.e., would a ‘mental/emotional health’ self give better results than a third-party/therapeutic ‘bot’ from a health care service? These are non-trivial issues with enormous global repercussions, and more research is needed.
Let’s look to the NHS, America’s National Institute of Mental Health and other research agencies to sponsor ‘selves-oriented’ mental health diagnostics and treatment.
Q: How effective are Selves today at responding to emotional queries vs rational/functional ones?
A: Well, if one reads Hume, he persuasively argues that ‘reason is a slave to passions’. This research domain – the entangling of ‘rational’ and ‘affective’ selves – is the hottest in neurology, neurophysiology, cognitive psychology and social psychology. Which is a long-winded way of saying the science here not only isn’t settled, it’s barely begun. These are exciting times for how one imagines one’s future selves.
Q: Are Selves actually a reality today? If not, how far off are we from having AI that will deliver this?
A: I like to say that most of the pieces are already here; they just haven’t been put together in a ‘selves-oriented’ way. I believe the focus has been misplaced: we’re optimizing software ‘agents’ at the expense of cultivating effective/affective ‘selves portfolios’. I think the future – 2025/6 – will increasingly be about multiple digital selves managing multiple software agents. Today, the top decile of productive people manage multiple devices with multiple apps – some automation-oriented, others augmentation-oriented; tomorrow, the most productive managers will manage teams of multiple selves. No, I’m not kidding.
The outstanding open question is whether those selves will be accessed via augmented and virtual reality interfaces versus a ‘new and improved’ mobile ‘phone’.
Conclusion
As you can imagine, the use cases for digital Selves would be extensive: interacting with a digital version of you to aid in commerce situations, from buying groceries to talking through the rationale of buying your next car.
Selves remind me of a highly advanced version of the Gatebox (below), which I saw at CES and which launched in Japan. But as I said, Selves, if they become reality, would deliver far more benefits than a hologram companion, which I found a bit creepy to be honest.
It’s still not clear how Selves will come to life, and I assume they could take any form: MetaHuman, voice, hologram or abstract; it’s the content they deliver that’s most important.
As Michael said, we are not there yet, especially with the more emotional decisions, such as the help-centre example. But with better AI and machine learning, it will not be long before we see commerce solutions everywhere.
I’m personally looking forward to meeting my digital Selves; I hope we like each other!
This feels like the headline or title of a science-fiction movie, one that would bring panic to the human race, taking your imagination to iconic films like The Matrix or I, Robot, where AI or robots run the world.
The intent of the headline was to make this short article thought-provoking and start a debate, not to scare you into a panicked frenzy of shopping trips to stock up on a year’s worth of toilet paper.
Let’s start with what digital humans are. Here is a quote I found from Deloitte:
“Digital humans are AI-powered, human-like virtual beings that offer the best of both: AI and human conversation. They can easily connect to any digital brain to share knowledge (i.e. chatbot and NLP) and interact using verbal and non-verbal cues – tone of voice and facial expressions.”
The best way to create a digital human or MetaHuman is via Unreal Engine using the MetaHuman Creator, which is a cloud-based app to create high-fidelity digital humans in minutes.
You can sign up for early access to the creator, and it’s true: I was able to create my own MetaHuman in a matter of minutes. They give you a starting structure, and from there you can blend in up to three people and change everything from eyes to outfit. I would highly recommend spending 15 minutes playing with the app.
An example of the output is below.
You will also have to connect speech and AI together to make the MetaHuman actually interact with you. This isn’t a tutorial on how to do it, more a case for why you should explore MetaHumans, but a rough sketch of the moving parts is below.
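For the curious, here is a minimal, purely illustrative Python sketch of the pipeline such a setup needs. Every function name is hypothetical; each stub would be replaced by a real speech-to-text service, a conversational AI and a text-to-speech voice, with lip-sync cues fed into the avatar’s face rig in Unreal Engine.

```python
# All names are hypothetical stubs, not a real MetaHuman or Unreal Engine API.

def speech_to_text(audio: bytes) -> str:
    return "Who are you?"                       # stub: a real STT service goes here

def chatbot_reply(text: str) -> str:
    return "I'm a MetaHuman. Ask me anything."  # stub: the NLP "digital brain"

def text_to_speech(text: str) -> bytes:
    return text.encode()                        # stub: a TTS voice goes here

def respond(audio_in: bytes) -> bytes:
    """One conversational turn: hear, think, speak (and, in practice, lip-sync the avatar)."""
    heard = speech_to_text(audio_in)
    reply = chatbot_reply(heard)
    return text_to_speech(reply)

print(respond(b"..."))  # prints the synthesized reply (here, just the text bytes)
```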
So, what are the uses for digital humans, and will they really take over the world? The best way to answer that, or at least give my opinion, is to show an example that UneeQ created to share the wisdom of Albert Einstein, enabling school children to ask Mr Einstein all about his life and work.
Using the Albert Einstein example you can see how such a thing can help. Another use case is help centres: we have all experienced calling our internet supplier for technical support and having them read from a list of actions, such as “have you turned it on and off at the wall socket and waited 10 seconds before plugging it back in?”. The trouble is that most call centres are not open 24 hours a day, yet we live our lives around the clock, so having a MetaHuman answer and solve 80 percent of out-of-hours calls would only add to customer satisfaction. A win-win situation.
In the future, when the AI is more advanced, a MetaHuman could also be used on helplines for more sensitive subjects like addiction, but we are still a long way off from that: it’s one thing to look human, it’s another to have emotional responses.
Because of realism like that below, you can also film brand commercials (TVCs) and, in the future, even films; in fact, lots of films have already been created using MetaHumans. MetaHumans will never take the main roles, in my view… (I know, I’m wrong given the film Avatar), but I can’t see a MetaHuman taking the place of a Robert De Niro and delivering the role as well.
Coming back to the headline, I don’t believe MetaHumans / digital humans are taking over the world. What they are doing is supporting us when real humans can’t interact: making a connection, answering questions and giving a more emotive experience than voice- or text-only interactions.
Therefore my view is: NO, MetaHumans are not taking over, but YES, they are the future.
This was just a quick intro into what MetaHumans are and what they can be used for; below are a couple of links for further reading.
Lately I have taken a massive interest in digital humans and all they can offer using AI, machine learning and buckets of data.
This example is spot on: using a figure the world knows so well and bringing him back to life for classrooms to ask him questions about all of his amazing work and his life. He responds in a casual way that doesn’t feel too forced, enabling you to actually start a conversation. I would recommend using your mic rather than typing, to give the conversation a natural feel.
Overall I was very impressed when I had my little chat with one of my heroes.
Audio content production company Aflorithmic and digital humans company UneeQ teamed up to create a digital version of the famous genius Albert Einstein. See the video below, and to chat to him click this LINK.
Mean Gene Hacks has created a motion device that, connected to his driving sim, gives him real-time motion feedback. He does this with a device that stimulates your nerves, making you move as if you were in a real car turning a corner or, as in the example in the video, falling off your chair when you crash the car.
The cost to make this motion sim was 50 dollars; he made a video on how he did it here.
It turns out a process called galvanic vestibular stimulation—also known as GVS—can be used to alter a human’s sense of balance by electrically stimulating a nerve in the ear using electrodes.
The biggest update to Google Earth since 2017 is the introduction of Timelapse.
“With Timelapse in Google Earth, 24 million satellite photos from the past 37 years have been compiled into an interactive 4D experience. Now anyone can watch time unfold and witness nearly four decades of planetary change. ”
Just seeing how much the planet has changed in such a short time is truly fascinating, but it is also quite scary to see the possible impact environmental change has had on the planet.
When I saw this thermal camera add-on for your mobile, my first question was: why? Why is this even a thing unless you’re in law enforcement?
Well, it turns out thermal imaging is actually a big thing and is already being used in a number of different places. Law enforcement, of course, but also:
1. Disease control – at airports, where you can track people’s temperatures to detect fevers, very important in Covid-19 times.
2. Road safety – The BMW 7 Series incorporates an infrared camera to see people or animals beyond the driver’s direct line of sight.
3. Search and rescue – to see through smoke and find people.
4. Pest control – finding unwanted visitors in your home.
5. Health care – circulation problems; thermal scanners can help detect the presence of deep vein thromboses and other circulatory disorders.
6. Home repairs – electrical, gas and water, to find blockages and leaks.
I found this website, which lists 65 uses, even down to barbecuing to find the optimal cooking temperature.
I saw this on LinkedIn. It’s not a new concept, but it’s becoming more popular, and look at all the use cases for virtual talks and conferences. I talked about textile waste in my last post; this could save $$$ on travel and cut pollution.
Back to this tech: I hope it does become more mainstream, not just because it’s cool, but because it can be very functional as well.
Finesse, a start-up, is using AI to take the guesswork out of fashion waste. It’s estimated that companies burn 13 million tons of textiles per year because they don’t have the data to know how much of a certain product to make.
Finesse want to change that by gathering data on social trends: not the catwalk, but social influencers, analyzing shares, comments and likes, and following the trend on certain posts to see where, when and by whom the trend was pushed forward.
With this data they can predict who would actually purchase.
Founder and CEO Ramin Ahmari said, via TechCrunch:
“In the simplest terms, you can think of what we do as seeing when Kylie posts a picture on Instagram and people go crazy about it … and then you see that happen not just on Kylie’s post but across Instagram, TikTok, Google Trends,” he said. “We predict the establishing of a trend before it goes super viral.”
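To picture what that prediction might involve, here is a toy Python sketch of the kind of engagement-velocity signal such a system could compute. This is my own assumption of the approach, not Finesse’s actual model, and the numbers are invented: if likes, comments and shares on posts featuring an item suddenly accelerate across platforms, flag the item as trending.

```python
# Toy illustration only: a guess at the kind of signal involved, not Finesse's code.
from statistics import mean

def engagement(post: dict) -> int:
    """Total engagement on a single post."""
    return post["likes"] + post["comments"] + post["shares"]

def is_trending(daily_posts: list, threshold: float = 1.5) -> bool:
    """Flag a trend when recent engagement outpaces the earlier baseline."""
    daily_totals = [sum(engagement(p) for p in day) for day in daily_posts]
    baseline = mean(daily_totals[:-2])  # the earlier days
    recent = mean(daily_totals[-2:])    # the last two days
    return recent > threshold * baseline

# Five days of (invented) posts featuring one item across platforms.
week = [
    [{"likes": 100, "comments": 10, "shares": 5}],
    [{"likes": 120, "comments": 12, "shares": 6}],
    [{"likes": 110, "comments": 11, "shares": 4}],
    [{"likes": 400, "comments": 60, "shares": 45}],   # the spike begins
    [{"likes": 900, "comments": 150, "shares": 120}],
]
print(is_trending(week))  # True: engagement is accelerating, so make more stock
```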
Amazon launched a new service in the US a few days ago where your Alexa device learns from your habits and routines and starts acting for you.
The best example to explain this: you have gone to bed and forgotten to turn off some lights. Alexa will then either ask you if you want the lights off or just automatically turn them off, all based on the habits and data Alexa has collected from your previous interactions.
Scary?! It shouldn’t be; you should already have been aware that Amazon has been collecting all this data since you started using the products. There are settings to turn all of this off, but I am honestly thankful for all these added features that put your data to use for useful solutions.
I did a panel talk this month, chaired by Pat Murphy of MCA, where I was asked about eCommerce, how I felt it would evolve, and the production behind its development.
I used a few examples, and then this one from Kanye West popped into my head.
Kanye wanted to bring art and entertainment together with commerce. The site is beautiful and engaging, but usability is the main component I found missing after watching the video below.
I believe there is a space where entertainment and commerce can live beautifully together, but we have to make sure the balance between entertainment and functionality is right.
Sadly I have not been able to launch the website and keep getting an error message, which might be because I am located in the UK.
Who?
A competition among accredited, tax-exempt colleges and universities (including foreign institutions of higher education that are organized and operated in a manner consistent with requirements for exemption from federal income tax under the laws of the United States) to create software that enables automated-capable racecars to best compete and aspire to finish first in a head-to-head race on the Indianapolis Motor Speedway’s (IMS) famed oval.
Prize?
1 Million dollars to the first team to cross the finish line in 25 minutes or less in a head-to-head, 20-lap race of automated Dallara IL-15 racecars around the Indianapolis Motor Speedway oval