Category: Innovation

Advancement in Digital Humans

Lastly this week, I found a company called Pantheonlab.ai that is creating digital humans. If you have read my other articles here:

Digital Humans taking over

Digital Humans taking over Cont.

You will see I am a little obsessed with digital humans and the value they can add to production and services like help desks.

Pantheon Lab have created very realistic digital humans that can also switch gender, appearance, or race.

With the advancement of technology, it is now possible to create artificial characters that look real, shift faces, sync lips, and clone voices.

And no, the person in the video isn’t real…


NFT’s for Good

When you say NFT, most people think it’s all about making a profit and commerce. That isn’t always true, as this next example shows.

Here NFTs are being used to help, in this case saving the rainforest. I wanted to highlight this not just because it’s a really interesting case, but also because I’m very proud of two of my ex-creatives who created this amazing idea (Tiago Beltrame & Nian He).

Go to the website: https://nemus.earth/

Navigate the Nemus map and make a ‘promise to conserve’ by minting your own NFT tied to the land. Each NFT drop features original artwork from an amazing artist to honor the unique flora and fauna found in the rainforest.

The ‘litepaper’ on the project: https://docs.nemus.earth/nemus-docs/nemus-litepaper/welcome-to-nemus

 

The release:

 

The World’s First Non-Fungible Territory has been officially renamed by indigenous people in Brazil in coalition with Nemus, a Web3 company that sells Non-Fungible Tokens (NFTs) to protect the Amazon Rainforest.

Around the world, indigenous peoples are stewards of the Earth, responsible for protecting 80% of the planet’s biodiversity.

With NFTs recently skyrocketing in popularity because of outsized gains and celebrity endorsements, land stewards of Brazil decided to showcase a purpose-driven utility for NFTs—to save the Amazon.

This event is captured in the short film, Non-Fungible Territory.

“I believe this [land] is an NFT, I live in an NFT.”

– Lilico, Local Community Resident

Sales of Nemus NFTs are being used to protect the Non-Fungible Territory from the clearcutting that has devastated much of the Amazon, address the $300 billion climate action funding gap to combat deforestation, create sustainable jobs and increase economic activities for the local people.

With a goal to invest a billion dollars in the region, Nemus is already having a positive impact.

They recently released over $100,000 from their treasury to fund the purchase of equipment to develop sustainable harvest methods of Brazil nuts and increase land security.

“If we are to save the Amazon, we must work with the people living there. Creating businesses in the middle of the jungle, with difficult access, no energy source, a population with limited education and qualifications is a huge challenge. But it can be done, and we have the experience. It is a lot of hard work and boots on the ground, but we can create incredible life changing results for the local communities.”

Flavio De Meira Penna, CEO of Nemus

As local communities learn more about the utility of NFTs such as Nemus’, they are embracing the technology as a means to extend their stewardship of the land.

Using Web3 to bring awareness to the Amazon’s needs, as well as financial alternatives for its indigenous caretakers, Nemus unites the world around an important cause: “Buy an NFT to save the NFT.”


Live Portraits

We have all seen deepfakes and how real they can seem. A group has taken it a step further, and the resolution and small details like mouth and eye movement are really impressive.

A snippet from the website:

“We propose a system for the one-shot creation of high-resolution human avatars, called megapixel portraits or MegaPortraits for short. Our model is trained in two stages. Optionally, we propose an additional distillation stage for faster inference.

Our training setup is relatively standard. We sample two random frames from our dataset at each step: the source frame and the driver frame. Our model imposes the motion of the driving frame (i.e., the head pose and the facial expression) onto the appearance of the source frame to produce an output image. The main learning signal is obtained from the training episodes where the source and the driver frames come from the same video, and hence our model’s prediction is trained to match the driver frame. “
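To make that two-frame episode concrete, here is a minimal PyTorch sketch of the training step described above. The networks are toy placeholders I invented for illustration, not Samsung Labs’ actual model or code.

```python
# A minimal sketch of the source/driver training episode - placeholder
# networks only, not the real MegaPortraits architecture.
import torch
import torch.nn as nn

class AvatarModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Toy stand-ins for the real appearance/motion encoders and generator.
        self.appearance = nn.Conv2d(3, 16, 3, padding=1)  # "who": source frame
        self.motion = nn.Conv2d(3, 16, 3, padding=1)      # "how": driver frame
        self.generator = nn.Conv2d(32, 3, 3, padding=1)   # output image

    def forward(self, source, driver):
        # Impose the driver's motion onto the source's appearance.
        feats = torch.cat([self.appearance(source), self.motion(driver)], dim=1)
        return self.generator(feats)

model = AvatarModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step: source and driver are sampled from the SAME video, so
# the prediction is trained to match the driver frame (the main signal).
source, driver = torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)
opt.zero_grad()
loss = nn.functional.l1_loss(model(source, driver), driver)
loss.backward()
opt.step()
```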

Website: https://samsunglabs.github.io/MegaPortraits/


Heard of Digital Selves!?

Shamefully, ‘Selves’ isn’t a term I was familiar with until I wrote an article on MetaHumans and was introduced to a fellow at the MIT Initiative on the Digital Economy called Michael Schrage. Michael has been doing collaborative research on ‘Selves’ and all the possibilities and opportunities they could bring us in the future.

Firstly, what are Selves?

Selves, in the simplest terms, are digital duplicates and doppelgangers of ‘Ones’. They’re analogous to the ‘digital twins’ you hear about for the ‘internet of things’. Ideally, digital selves would amplify all of your best human aspects and attributes, to quote Michael. He also believes they should be designed to mitigate your lesser qualities. He wants a ‘digital self’ nudging him to stop interrupting, based on our interview, or so I understand… 🙂

As he puts it, “These ‘multiple selves’ will yield more productive employees, more empathetic companions, and more creative thinkers — not merely automated attendants.” Michael is referring to current agent-based intelligent systems, such as Siri and Alexa, that help you with chores, calendars and lists, and find information for you at speed. These current systems give automated responses based on what they have learned.

In short, with the advancements in AI and machine learning, Selves could be a disruptive future evolution of our current automated attendants.

I asked Michael a few questions based on the above, and his responses are below:

Q: Looking at Selves through a commerce lens, would a Self, embedded into a digital mirror, know how to respond to a shopper query like “I’m not sure which dress or suit looks best on me, what do you think?”

A: That’s (almost) exactly the right question – you want an ‘affective’ self to be able to advise ‘you’.

Forgive the intrinsic gender orientation of this example – which dress is ‘sexier’, more ‘professional’, more ‘stylish’, etc., based on the data-driven, recommender-systems-enabled ‘preferences’ and ‘attributes’ that have been algorithmically inferred. So a Lululemon-like or AR ‘mirror’ should be able to ‘model’ dresses that (literally) reflect one’s ‘selves preference’ – projecting ‘power and confidence’.

 My design ethos emphasizes ‘agency’ and ‘choice’ – not the commanding approach.

Multiple selves are about empowering people to get greater ROI (return on introspection) on how they want to be seen and how they want to (in your use case) see themselves.
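To ground that mirror idea, here is a toy sketch, entirely my own illustration rather than Michael’s work or any real product, of scoring outfits against a self’s algorithmically inferred style preferences. All attribute names and weights are invented.

```python
# Hypothetical "digital mirror" sketch: rank outfits by how well their
# attributes match the user's inferred preference profile.
inferred_preferences = {"professional": 0.8, "stylish": 0.6, "bold": 0.2}

outfits = {
    "navy suit":   {"professional": 0.9, "stylish": 0.5, "bold": 0.1},
    "red dress":   {"professional": 0.3, "stylish": 0.8, "bold": 0.9},
    "black dress": {"professional": 0.7, "stylish": 0.7, "bold": 0.4},
}

def score(attrs: dict) -> float:
    # Weighted match between the item's attributes and the profile.
    return sum(inferred_preferences[k] * v for k, v in attrs.items())

best = max(outfits, key=lambda name: score(outfits[name]))
print(f"The mirror suggests: {best}")  # -> "black dress" for this profile
```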


Q: Another use case could be a helpline for addiction, to talk someone down from self-harming. How would a Self respond differently?

A: Wow, again – great question. There are now a ton of ‘mindfulness’ apps and other ‘mental health’-oriented ‘chatbots’ that could, indeed, be used to create a different, healthier dialogue with one’s self. But now we’re venturing into areas where I think more serious research needs to be done: i.e., would a ‘mental/emotional health’ self give better results than a third-party therapeutic ‘bot’ from a health care service? These are non-trivial issues with enormous global repercussions, and more research is needed.

Let’s look to the NHS, America’s National Institute of Mental Health and other research agencies to sponsor ‘selves-oriented’ mental health diagnostics and treatment.

Q: How effective are Selves today at responding to emotional queries versus rational/functional ones?

A: Well, if one reads Hume, he persuasively argues that ‘reason is a slave to the passions’. This research domain – the entangling of ‘rational’ and ‘affective’ selves – is the hottest in neurology, neurophysiology, cognitive psychology and social psychology. Which is a long-winded way of saying the science here not only isn’t settled, it’s barely begun. These are exciting times for how one imagines one’s future selves.

 Q: Are Selves actually a reality today? If not, how far off are we from having AI that will deliver this?

A: I like to say that most of the pieces are already here; they just haven’t been put together in a ‘selves-oriented’ way. I believe the focus has been misplaced: we’re optimizing software ‘agents’ at the expense of cultivating effective/affective ‘selves portfolios’. I think the future – 2025/6 – will increasingly be about multiple digital selves managing multiple software agents. Today, top-decile producers manage multiple devices with multiple apps, some automation-oriented, others augmentation-oriented; tomorrow, the most productive managers will manage teams of multiple selves. No, I’m not kidding.

The outstanding open question is whether those selves will be accessed via augmented and virtual reality interfaces versus a ‘new and improved’ mobile ‘phone’.

Conclusion

As you can imagine, the use cases for digital Selves would be extensive: interacting with a digital version of you to aid in commerce situations, from buying groceries to talking through the rationale of buying your next car.

Selves remind me of a highly advanced version of the Gatebox (below), which launched in Japan and which I saw at CES. But as I said, Selves, if they become reality, would deliver far more benefits than a hologram companion, which I found a bit creepy to be honest.


It’s still not clear how Selves will come to life, and I assume they could take any form: MetaHuman, voice, hologram or something abstract. It’s the content they deliver that’s most important.

As Michael said, we are not there yet, especially with the more emotional decisions, such as the helpline example. But with better AI and machine learning, it will not be long before we see commerce solutions everywhere.

I’m personally looking forward to meeting my digital Selves; I hope we like each other!

Reference Links

Michael Schrage is a research fellow at the MIT Initiative on the Digital Economy (IDE) and the MIT Sloan School of Management, and author of The Innovator’s Hypothesis: How Cheap Experiments Are Worth More Than Good Ideas.

Article on Selves by Michael


Michael’s white paper: https://ide.mit.edu/wp-content/uploads/2017/03/IDE-Research-Brief_v217.pdf

 Hume: https://plato.stanford.edu/entries/hume-moral/

Darren’s post on MetaHumans: https://www.linkedin.com/pulse/digital-humans-taking-over-darren-richardson/

Darren Richardson is a Digital Executive with over 20 years’ experience in bringing technology and creativity closer together for brands.

Chat with Einstein the digital human

Lately I have taken a massive interest in Digital Humans and all they can offer using AI, machine learning and buckets of data.
This example is spot on: it uses a figure the world knows so well and brings him back to life for classrooms, where students can ask him questions about his amazing work and his life. He responds in a casual way that doesn’t feel too forced, enabling you to actually hold a conversation. I would recommend using your mic rather than typing to give the conversation a natural feel.

Overall I was very impressed when I had my little chat with one of my heroes.

Audio content production company Aflorithmic and digital humans company UneeQ teamed up to create a digital version of the famous genius, Albert Einstein. See the video below, and to chat with him, click this LINK.

Source: Interesting Engineering

Insane Motion with Driving Sim

Mean Gene Hacks has created a motion device that, connected to his driving sim, gives him real-time motion feedback. He does this with a device that stimulates your nerves, making you move as if you were in a real car turning a corner or, as in the example in the video, falling off your chair when you crash the car.
The motion sim cost just $50 to build – he made a video on how, here.

It turns out a process called galvanic vestibular stimulation—also known as GVS—can be used to alter a human’s sense of balance by electrically stimulating a nerve in the ear using electrodes.
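As a rough illustration of the control loop, simplified and invented for this post (not Mean Gene Hacks’ actual code, and certainly not hardware or safety guidance), the sim’s lateral acceleration could be mapped to a small, bounded electrode current:

```python
# Illustrative GVS mapping sketch only - invented and simplified.
# Real builds require careful current limiting and electrical isolation.
def gvs_current_ma(lateral_g: float, max_ma: float = 1.5) -> float:
    """Map lateral acceleration (g) to electrode current (mA).

    Positive = drive toward the right electrode (induce a lean right),
    negative = toward the left. Clamped to a small ceiling.
    """
    gain = 1.0  # mA per g; in practice tuned per person
    current = gain * lateral_g
    return max(-max_ma, min(max_ma, current))

# Telemetry from the driving sim: a hard right turn, then a crash impulse.
for g in [0.2, 0.8, 1.2, 4.0]:
    print(f"{g:+.1f} g -> {gvs_current_ma(g):+.2f} mA")
```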

Thermal Camera going mainstream

When I saw this thermal camera add-on for your mobile, my first question was: why? Why is this even a thing unless you’re in law enforcement?
Well, it turns out thermal imaging is actually a big deal and is already being used in a number of different places – law enforcement of course, but also:
1. Disease control at airports, where you can track people’s temperatures to detect fevers – very important in Covid-19 times.
2. Road safety – the BMW 7 Series incorporates an infrared camera to see people or animals beyond the driver’s direct line of sight.
3. Search and rescue – to see through smoke and find people.
4. Pest control – finding unwanted visitors in your home.
5. Health care – thermal scanners can help detect the presence of deep vein thromboses and other circulatory disorders.
6. Home repairs – electrical, gas and water, finding blockages and leaks.

I found a website that listed 65 uses, even down to barbecuing, to find the optimal temperature to cook.

So it does have uses, and it starts at $150. Head over to Flir for more information on the product.

Bringing Sensors and Holograms together.


I saw this on LinkedIn. It’s not a new concept, but it’s becoming more popular – and look at all the use cases for virtual talks and conferences. I talked about textile waste in my last post; this could save $$$ on travel and cut pollution.

Back to this tech: I hope it does become more mainstream, not just because it’s cool, but because it can be very functional as well.

Found via Jean-Baptiste

AI Predicted Fashion

Finesse, a start-up, is using AI to take the guesswork out of fashion production and cut the waste. It’s estimated that companies burn 13 million tons of textiles per year because they don’t have the data to know how much of a certain product to make.

Finesse want to change that by gathering data on social trends – not the catwalk, but social influencers – analyzing shares, comments and likes, and following the trend on certain posts to see where, when and by whom the trend was pushed forward.

With this data they can predict who would actually make a purchase.

Founder and CEO Ramin Ahmari said via TechCrunch:
“In the simplest terms, you can think of what we do as seeing when Kylie posts a picture on Instagram and people go crazy about it … and then you see that happen not just on Kylie’s post but across Instagram, TikTok, Google Trends,” he said. “We predict the establishing of a trend before it goes super viral.”
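A crude sketch of how such early trend detection might work (my assumption of the approach, not Finesse’s actual model): flag a “trend” when engagement on posts featuring an item spikes far above its recent baseline across platforms.

```python
# Toy trend detector: is today's engagement a large outlier vs. the
# item's recent baseline?
from statistics import mean, stdev

def is_trending(daily_engagement: list[int], window: int = 7,
                z_threshold: float = 3.0) -> bool:
    """True if the latest day is a big spike vs. the prior window."""
    baseline = daily_engagement[-window - 1:-1]
    today = daily_engagement[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    return sigma > 0 and (today - mu) / sigma > z_threshold

# Likes + shares + comments for one item, aggregated across platforms.
history = [120, 135, 110, 140, 125, 130, 128, 950]  # sudden spike today
print(is_trending(history))  # True -> make this item before it peaks
```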

Links:
Finesse
Crunchbase information

Amazon Alexa autopilot

Amazon launched a new service in the US a few days ago where your Alexa device learns from your habits and routines and starts acting for you.

The best example to explain this would be: you have gone to bed and forgotten to turn off some lights. Alexa will then either ask if you want the lights off or just automatically turn them off, all based on the habits and data Alexa has collected from your previous interactions.
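Conceptually it might look something like this toy sketch (my illustration of the idea, not Amazon’s implementation):

```python
# Learn the usual "lights off" hour from interaction history, then act
# on the anomaly - a toy version of a habit-based "hunch".
history_hours = [22, 23, 22, 23, 22, 22, 23]  # past nightly switch-off times
usual_off_hour = round(sum(history_hours) / len(history_hours))  # -> 22

def hunch(current_hour: int, lights_on: bool) -> str:
    if lights_on and current_hour > usual_off_hour:
        # Depending on your settings this could ask first, or act directly.
        return "You usually have the lights off by now - turn them off?"
    return "no action"

print(hunch(current_hour=23, lights_on=True))
```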

Scary?! It shouldn’t be; you should have been aware that Amazon has been collecting all this data since you started using its products. There are settings to turn all of this off, but I am honestly thankful for features like these that put your data to use in useful solutions.

More information over at the Verge.

Entertainment Commerce

I did a panel talk this month, chaired by Pat Murphy of MCA, and I was asked about eCommerce, how I felt it would evolve, and the production behind its development.
I used a few examples, and then this one from Kanye West popped into my head,
where Kanye wanted to bring art and entertainment together with commerce. The site is beautiful and engaging, but usability is the main component I found missing after watching the video below.
I believe there is a space for entertainment and commerce to live beautifully together, but we have to make sure the balance between entertainment and functionality is right.

Sadly I have not been able to load the website and keep getting an error message, which might be because I am located in the UK.

A longer article is here from Fastcompany.

Indy Autonomous Car Race

Yes, it’s real!
A race happening this year.

Who?
A competition among accredited, tax-exempt colleges and universities (including foreign institutions of higher education that are organized and operated in a manner consistent with requirements for exemption from federal income tax under the laws of the United States) to create software that enables automated-capable racecars to best compete and aspire to finish first in a head-to-head race on the Indianapolis Motor Speedway’s (IMS) famed oval.

Prize?
$1 million to the first team to cross the finish line in 25 minutes or less in a head-to-head, 20-lap race of automated Dallara IL-15 racecars around the Indianapolis Motor Speedway oval.

Here is the CES press conference

and here is the competition website.

Coin-sized smart home Amazon competitor

Josh.ai have developed a few products to compete with the Amazon suite. One that caught my eye was the Josh Nano, because smart home devices over the past few years have always been quite large. I know manufacturers try to make them look futuristic and/or beautifully designed, so they almost look like sculptures, BUT what if you had a device with all the same functions that was hidden within your already beautifully designed home?

I’m a big fan of Amazon and must admit every room in my house has a device, but if I had the option to swap a few for these hidden Josh Nanos, I would jump at the chance.

Personally, I haven’t tried the tech, so I can’t give a full review…
*hint hint @josh.ai

AI Powered Parking

A trial project led by Fetch.ai and a blockchain company called Datarella has launched in Munich to bring AI to a parking facility owned by a company called Connex.
The purpose behind the technology is to control pricing on the building’s parking spaces and to reward employees who don’t use the parking with public transport passes.

“It could say okay if you park closer, you’re going to be charged more; if you park farther away, you’ll be charged less,” says Humayun Sheikh, CEO of Fetch.ai. “We reward you for doing certain actions and we discourage you from doing certain actions.”

Systems like this have shown that pricing parking based on demand reduces vehicle miles travelled and greenhouse gas emissions.

From a commercial point of view: Fetch.ai’s approach aims to streamline the process of finding a parking spot using an app that drivers can set to automatically book parking spaces when available, based on predetermined price and location settings.
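As a hedged sketch of the pricing idea Sheikh describes, here is my simplification (not Fetch.ai’s actual algorithm): closer spots cost more, price scales with demand, and an agent auto-books the cheapest spot under the driver’s cap.

```python
# Toy demand- and proximity-based parking pricing.
def spot_price(distance_m: float, occupancy: float,
               base_rate: float = 3.0) -> float:
    """Hourly price for a spot `distance_m` from the entrance.

    occupancy: fraction of the facility currently occupied (0.0-1.0).
    """
    proximity_premium = max(0.0, 1.0 - distance_m / 200.0)  # fades by 200 m
    demand_multiplier = 1.0 + occupancy                      # up to 2x when full
    return round(base_rate * (1.0 + proximity_premium) * demand_multiplier, 2)

# The booking agent picks the cheapest spot under the driver's price cap.
spots = {"A1 (10 m)": 10, "C4 (80 m)": 80, "F9 (190 m)": 190}
prices = {name: spot_price(d, occupancy=0.6) for name, d in spots.items()}
cap = 7.00
bookable = {n: p for n, p in prices.items() if p <= cap}
print(prices, "->", min(bookable, key=bookable.get))
```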

Full story on FastCompany

Google’s FREE AI-assisted monster maker

This is one of those tools you will spend hours on without getting the result you really wanted.

Google has launched a fun tool that takes your strange sketches and turns them into a kind of 3D monster.

It’s fun, and I’m sure there’s lots of AI and tech behind the scenes, but it’s not something to use in the real world, where monsters don’t exist – well, not ones that look like these.

Check it out if you have a spare 10 minutes at chimera.

The Future of clothes shopping?!

Zozo have made what I assume is a second version of their try-it-on suit; I base this on the fact it’s called ZozoSuit 2.
The suit, paired with your mobile device, takes all your measurements to make sure you don’t end up with ill-fitting clothes when ordering online in the future.
In a Covid world this is genius. While looking into the suit, I saw they also make a mat for shoes.

References:
Mat
Suit

Create 3D worlds via speech

Create 3D worlds using half a million different 3D objects.
You can create with or without code, and also by speech recognition.

Once your world is created, you can view it on a number of platforms, including VR headsets.

The concept seems great; the telling will be in how interaction with the worlds works and how easy it is to manipulate the objects to move and interact with each other.
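For a sense of the speech pipeline, here is a hypothetical sketch (invented names and logic, not Anything World’s actual API) that turns a transcribed voice command into object spawn requests:

```python
# Toy speech-to-world parser: extract object nouns and counts from a
# transcript, then "spawn" matching models from a large library.
import re

MODEL_LIBRARY = {"tree", "house", "dog", "car"}  # stand-in for ~500k models
NUMBER_WORDS = {"a": 1, "an": 1, "one": 1, "two": 2, "three": 3}

def parse_command(transcript: str) -> list[tuple[str, int]]:
    """Turn 'add three trees and a house' into [('tree', 3), ('house', 1)]."""
    words = re.findall(r"[a-z]+", transcript.lower())
    spawns = []
    for prev, word in zip([""] + words, words):
        noun = word[:-1] if word.endswith("s") else word  # crude singularise
        if noun in MODEL_LIBRARY:
            spawns.append((noun, NUMBER_WORDS.get(prev, 1)))
    return spawns

# In the real product, a game engine call would place each model.
for name, count in parse_command("Add three trees and a house"):
    print(f"spawning {count} x {name}")
```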

Product site here: Anything World

Makeup you can only wear online via L’oreal

As we currently live a massive amount of our lives online, the makeup brand L’Oréal has launched the first-ever virtual makeup that can only be worn online, using AR.

Back in 2018, L’Oréal purchased an augmented reality filter company called Modiface, which you can now use with the likes of Snapchat and Instagram to place the makeup on the wearer.
The Snap Camera support in particular enables the selfies to be used across plenty of video chat services like Houseparty and Zoom.

The support website is here

VR Meetings – The Future?

With the majority of the world in lockdown and most people meeting on Zoom / Teams / Skype / Google, it has taught us a couple of things:
1. We can actually run businesses and meetings over video.
2. It gets a bit lonely.

I saw this post by Vox where Recode and Spatial had a meeting in mixed reality, and it looked pretty awesome.

They said it felt a bit awkward at first, but two minutes in, that faded away.

Could this be the future or at least part of the future?

Being a massive VR fan, I hope it is. Watch out for Spatial; I think we will be seeing more of this team in the future.

Camera for the visually impaired

Oren Geva, a very talented inventor who won the Asia Design Prize last year, created a camera for the visually impaired.

A quote from the website:
2C3D is a camera that enables the blind to see. The camera is a development and design of a tactile camera concept for the vision impaired. The camera creates 3D photos and videos and has a 3D screen. The screen, inspired by “Pin Toy,” is built from numerous 3D pixels that shift depending on the photo to form the 3D shot on the screen surface (giving the term “touch screen” a new and more literal interpretation).

The user can touch the screen while photographing and feel what the camera is seeing, in real time. When users like what they feel, they can click and save the photo. The saved 3D file can be felt again later. The 2C3D performs as a camera for the blind and as a physical-digital photo album.
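As a conceptual sketch of that pin-screen idea (my own illustration, not the actual 2C3D design), a camera depth map could be quantized into physical pin heights, so nearer surfaces push pins further out under the fingers:

```python
# Toy depth-map-to-pins conversion for a "pin toy" style tactile screen.
def depth_to_pins(depth_mm, min_mm=300, max_mm=2000, levels=8):
    """Map each depth sample to a pin extension level (0 = flat, 7 = max)."""
    pins = []
    for row in depth_mm:
        pin_row = []
        for d in row:
            d = max(min_mm, min(max_mm, d))           # clamp to sensor range
            nearness = (max_mm - d) / (max_mm - min_mm)
            pin_row.append(round(nearness * (levels - 1)))
        pins.append(pin_row)
    return pins

# A tiny 3x4 depth frame (mm): a near object in the centre of the scene.
frame = [[1800, 1700, 1700, 1900],
         [1600,  500,  550, 1750],
         [1800, 1650, 1700, 1900]]
for row in depth_to_pins(frame):
    print(row)  # higher numbers = pins pushed further out
```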

Product website here

Fresh Food? check the Label

Are you the sort of person that throws food out once it reaches the expiry date on the product, or are you the sort that takes an educated gamble and goes for it?

Both are risky. Firstly, you could be throwing out good food… meaning wasting food… meaning BAD. Or you could be eating something that will likely come back to visit you, and not in a good way.

Most labels give you expiry dates based on worst-case scenarios, but we don’t normally keep our products in worst-case conditions… normally!

Enter Mimica, a product label that can tell you the real expiry of your product via touch: if it’s good, the label is smooth; if bad, it goes bumpy.

They are already trialling this on some dairy and meat produce, so watch out: if you go to the store and your expiry label has changed, don’t throw the product out until the label goes bumpy.

VR & AR enhancing the way we learn

I feel this has way more use cases than just learning, which is where zSpace seem to be focusing.
Imagine in-store and brand experiences, the deconstruction of products, and even virtual building and play (Lego).

This all uses current hardware, with the zSpace addition bringing a VR / AR experience to laptops and desktops.

I would love to demo this product to see if the video holds up to the experience.