VUX World | https://vux.world | Hosted by Kane Simms

All about BotTalk and how to run a voice first discovery workshop with Andrey Esaulov
Mon, 21 May 2018 | 1:04:46 | Episode 17

This week, we’re digging into how you can create an Alexa Skill using BotTalk and we give you a template for running a voice first discovery workshop, with SmartHaus Technologies CEO and BotTalk co-founder, Andrey Esaulov.

We discuss the importance of starting with a solid use case and how imperative it is to base your voice app on a real-world scenario that’ll add value to your users.

What turns an average voice experience into an EPIC voice experience? Send us your answers and you could feature on the VUX World Flash Briefing this week!

We then dive deep into the practical detail of how to approach designing a voice first user experience with BotTalk and find out more about the language it’s built in: YAML. We discuss what BotTalk is, how it’s different from some of the other tools on the market, how it works, its features and how you can get up and running.

Finally, Andrey takes us through a voice first discovery workshop template that he uses with clients in order to take a brand from zero to hero: from ideation to prototype, and how you can do the same too.

We also traverse some other interesting conversational landscapes such as the concept of skill-first companies: brands that pop up as skills which are the core of the business, like an app is for Instagram. We chat about Artificial Intelligence and how intelligent it actually is in the voice first space. We touch on managing client expectations, monetisation and how voice is making waves in Germany.


About BotTalk

The current selection of skill-building tools on the market sits at opposite ends of the technical spectrum. Some tools, like Jovo, require you to code from the ground up and be a skilled back-end developer. Others, like Storyline, have a drag and drop interface and don’t require any coding at all.

BotTalk bridges the gap between those two worlds with a tool that’s aimed at UX designers who have some basic coding knowledge, like HTML and CSS. It provides some of the technical capability you’d expect if you built something from scratch, whilst using a simpler language: YAML. Think of it as HTML for voice.
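To make that concrete, here’s a rough sketch of what a YAML-driven voice scenario could look like. The step names and fields below are purely illustrative, not BotTalk’s actual schema; check the BotTalk docs for the real syntax.

```yaml
# Illustrative only - not BotTalk's real schema.
scenario: "Coffee Order"
steps:
  - name: Welcome
    say: "Hi! What coffee can I get you today?"
    listen: true          # wait for the user's reply
  - name: Confirm
    say: "One {{ coffee_type }} coming right up."
    next: Goodbye
  - name: Goodbye
    say: "Thanks for your order. Bye!"
    end: true
```

The appeal for UX designers is the same as HTML’s: the whole dialogue flow is readable top-to-bottom without touching back-end code.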


Our Guest

Andrey Esaulov is the CEO of SmartHaus Technologies, which specialises in growth hacking in the mobile space, and the co-founder of BotTalk, a platform for building voice first and bot applications.

Andrey has a computer science background, with extensive experience in the startup world and mobile growth space, as well as a PhD in Linguistics and Literacy.

Andrey’s skillset is a perfect match for this industry and his knowledge in this area is vast. Couple his computer science and linguistics knowledge with his skills in working with clients and delivering growth and you’ve got a perfect recipe for success.


Links

Check out BotTalk

Follow Andrey on Twitter

Join the BotTalk Facebook community

Follow BotTalk on Insta

Watch the BotTalk tutorials on YouTube

Visit the SmartHaus Technologies website

Join the Alexa Slack channel

Enable the VUX World Flash Briefing

Feature on this week's Flash Briefing

Where to listen


All about voice search with the SEO Oracle, Dr. Pete
Mon, 14 May 2018 | 1:02:13 | Episode 16

Dr. Pete, Marketing Scientist at Moz, and world-leading SEO oracle, tells all about the voice search landscape, and how you can rank for searches on digital assistants like Google Assistant and Amazon Alexa.


This is a jam-packed episode with deep, deep insights, advice and guidance on all things voice search related. We'll give you practical ways to compete to be the answer that’s read out in voice first searches, as well as some notions on the current and potential future benefits that it could bring.


Voice search

There are all kinds of stats around voice search, which we’ve touched upon before.


With more people using their voice to search, how will that affect search marketers, content creators and brands?


What’s the difference between a voice search and a typed search?


Is there anything you can do to appear in voice search results?


We speak to one of the search industry's top sources of SEO knowledge, Dr. Pete, to find out.


Getting deep into voice search

In this episode, we’re discussing the differences between voice search on mobile, voice first search on smart speakers and typed search.


We discuss the absence of search engine results pages (SERPs) in a voice first environment and increased competition for the singularity: the top spot in voice search.


We chat about the search landscape, the effect voice is having on search, changing user behaviour and expectations, new search use cases and multi modal implications, challenges and opportunities.


We get into detail about how voice search works on devices such as Google Assistant and Google Home. This includes debating Google’s knowledge graph and its advantages and disadvantages in a voice first context.


We look at the practicalities of serving search results via voice. This touches on the different types of search results, such as featured snippets, and how voice handles different data formats such as tables. We get into detail about the different types of featured snippets available and how each translates (or doesn’t) to voice.


We discuss Dr. Pete’s work and studies in the voice first space including his piece ‘What I learned from 1,000 voice searches’ and what he found.


We wrap up with some practical tips that you can use right now to start preparing for the influx of voice searches that’ll be hitting the air waves soon and help you start to rank in a voice first environment.
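One practical example of preparing content for voice search is schema.org’s speakable structured data, which flags the parts of a page best suited to text-to-speech playback. Support is currently limited (Google treats it as a beta, mainly for news content), and the selectors and URL below are placeholders for your own page:

```json
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "name": "How voice search works",
  "speakable": {
    "@type": "SpeakableSpecification",
    "cssSelector": [".article-summary", ".quick-answer"]
  },
  "url": "https://example.com/how-voice-search-works"
}
```

The idea mirrors featured snippets: you’re telling the assistant exactly which short, self-contained passage to read out.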


Our Guest


Dr. Pete Myers (a.k.a. Dr. Pete, a.k.a. the Oracle) is the Marketing Scientist at Moz, the SEO giant and search industry leader.


Dr. Pete has been an influential search marketer since 2012 and has spent years studying Google’s search algorithm, advising clients and the SEO industry on best practice and guiding the industry into the future.

His research and writing on the topic has been helping brands keep on top of the search space, improve their rankings and business performance and has helped keep Moz at the top of the industry.


Moz has been at the top of the SEO chain since 2004 and is trusted by the whole SEO industry as the place to go for SEO tooling, insights and practical guidance.


Links


Where to listen

All about Voysis and the GUI to VUI transition with Brian Colcord
Mon, 07 May 2018 | 52:25 | Episode 15

We’ve covered plenty of voice first design and development on this podcast. Well, that’s what the podcast is, so we’re bound to! Most of what we’ve discussed has largely been voice assistant or smart speaker-focused. We haven’t covered a huge amount of voice first application in the browser and on mobile, until now.


Mic check

You’ll have noticed the little mic symbol popping up on a number of websites lately. It’s in the Google search bar, it’s on websites such as EchoSim, and Spotify is trialling it too. When you press that mic symbol, it enables your mic on whatever device you’re using and lets you speak your search term.


Next time you see that mic, you could be looking at the entry point to Voysis.

On a lot of websites, that search may well just use the website’s standard search tool to perform the search. With Voysis, its engine will perform the search for you using its voice tech stack.


That means that you can perform more elaborate searches that most search engines would struggle with. For example:

“Show me Nike Air Max trainers, size 8, in black, under $150”


Most search engines would freak out at this, but not Voysis. That’s what it does.

Of course, it’s more than an ecommerce search tool, as we’ll find out during this episode.
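As a rough illustration of the idea (a naive sketch, not Voysis’s actual engine, which uses a full natural-language stack rather than pattern matching), a spoken query like the one above has to be decomposed into structured filters before it can hit a product index:

```python
import re

def parse_product_query(query: str) -> dict:
    """Naively pull structured filters out of a spoken product search.
    Illustrative only - a real voice engine uses NLU, not regexes."""
    filters = {}
    q = query.lower()
    # Price ceiling: "under $150"
    if m := re.search(r"under \$?(\d+)", q):
        filters["max_price"] = int(m.group(1))
    # Size: "size 8"
    if m := re.search(r"size (\d+)", q):
        filters["size"] = int(m.group(1))
    # Colour: "in black"
    if m := re.search(r"in (black|white|red|blue)", q):
        filters["colour"] = m.group(1)
    # Product phrase: whatever follows "show me", up to the first comma
    if m := re.search(r"show me ([^,]+)", q):
        filters["product"] = m.group(1).strip()
    return filters

filters = parse_product_query(
    "Show me Nike Air Max trainers, size 8, in black, under $150"
)
```

The hard part, and the reason most site search tools “freak out”, is doing this robustly for arbitrary phrasing rather than a handful of hand-written patterns.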


In this episode

We discuss how approaches to new technology seem to wrongly follow a reincarnation route. Turning print into web by using the same principles that govern print. Turning online into mobile by using the same principles that govern the web. Then taking the practices and principles of GUI and transferring that to VUI. We touch on why moving your app to voice is the wrong approach.


We also discuss:

  • Voysis - what it is and what it does
  • Getting sophisticated with searches
  • Designing purely for voice vs multi modal
  • The challenge of ecommerce with a zero UI
  • The nuance between the GUI assistant and voice only assistants
  • How multi modal voice experiences can help the shopping experience
  • Making the transition from GUI to VUI
  • The similarities between moving from web to mobile and from mobile to voice - (when moving to mobile, you had to think about gestures and smaller screens)
  • Error states and points of delight
  • The difference between designing for voice and designing for a screen
  • Testing for voice
  • Understanding voice first ergonomics


Our Guest

Brian Colcord, VP of Design at Voysis, is a world-leading designer, cool, calm and collected speaker and passionate sneakerhead.


After designing the early versions of the JoinMe brand markings and UI, he was recruited by LogMeIn and went on to be one of the first designers to work on the Apple Watch prior to its release.


Brian has made the transition from GUI to VUI design and shares with us his passion for voice, how he made the transition, what he learned and how you can do it too.


About Voysis

Voysis is a Dublin-based voice technology company that believes voice interactions can be as natural as human ones and is working intently to give brands the capability to have natural language interactions with customers.


Links


Check out the Voysis website

Follow Voysis on Twitter

Read the Voysis blog

Join Brian on LinkedIn

Follow Brian on Twitter

Listen to the AI in industry podcast with Voysis CEO, Peter Cahill

Read Brian's post, You're already a voice designer, you just don't know it yet


Where to listen


All about voice first games with Florian Hollandt
Mon, 30 Apr 2018 | 57:26 | Episode 14

Voice first games are one of the most popular Amazon Alexa skill categories. So what types of voice games are available? And how do you create them? We speak to game developer and reviewer, Florian Hollandt, to find out.


Games are helping Alexa take off. According to Voicebot.ai, Alexa Skill games are the second most popular skill category, behind smart home skills. Amazon has been encouraging the development of games, too. We've seen the Alexa Skills Challenge: Kids recently and I'd say it’s more than likely that most of the developer rewards will have gone to game developers, given the engaging nature of games.


We’ve touched upon voice first games on the podcast previously, such as our chat with Jo Jaquinta of Tsa Tsa Tzu, but we haven’t yet covered audio game development in detail, which is what we’ll do today.


Creating voice first games

In this episode, we’ll be getting into detail about the different kinds of voice first games that are out there, as well as looking at some of the techniques you can use to create engaging games such as interactive stories.


We’ll cover things like:

  • Naming a game and how a name can reduce discoverability
  • The challenge of providing content
  • The one game per month challenge
  • The types of games that are available on Amazon Alexa
  • Game design techniques
  • Interactive story game development techniques
  • Fake decisions - what are they and how can you use them to enhance engagement
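To illustrate the “fake decision” technique from that last bullet (a hypothetical sketch, not Florian’s actual code): both choices advance the story to the same scene, but the acknowledgement line differs, so the player still feels agency without you having to write twice the content.

```python
# Hypothetical interactive-story node using a "fake decision":
# both options lead to the same next scene; only the flavour line differs.
STORY = {
    "gate": {
        "prompt": "You reach the castle gate. Do you knock, or sneak round the side?",
        "choices": {
            "knock": {"ack": "The heavy door creaks open.", "next": "hall"},
            "sneak": {"ack": "You slip through a gap in the wall.", "next": "hall"},
        },
    },
    "hall": {"prompt": "Either way, you find yourself in the great hall...", "choices": {}},
}

def respond(scene: str, choice: str) -> tuple:
    """Return (spoken response, next scene) for a player's choice."""
    option = STORY[scene]["choices"][choice]
    nxt = option["next"]
    return f'{option["ack"]} {STORY[nxt]["prompt"]}', nxt

speech, scene = respond("gate", "sneak")  # scene is "hall" either way
```

The player hears a different consequence for each choice, but the branching cost to the author is one extra sentence, not a whole extra storyline.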


Our Guest

Florian Hollandt is the Product Manager at Jovo, the cross-platform voice app platform, and is also an Alexa game developer and reviewer. He’s created some popular games on Alexa, such as the German card game, Mau Mau, and has written a ton of voice first game reviews on Medium.


Florian is madly passionate about voice first games and his knowledge on the subject is impressive. He guides us through his experience and shares some delightful tips on how you can start creating voice first games yourself.


Links

Some of the things Florian spoke about:


Turning Alexa for Business into a business with Bob Stolzberg
Mon, 23 Apr 2018 | 51:13 | Episode 13

Today, we’re following the story of the inspirational Bob Stolzberg of VoiceXP, and giving you some deep insights into how you can turn Alexa for Business into a business.


In this episode, Dustin and I are getting into the detail of how VoiceXP came to be, how Bob almost made $14,500 profit from his first Alexa Skill, why voice is such a big opportunity and how he turned Alexa for Business into a business.


We’re also discussing the features that come with Amazon Alexa for Business and some example use cases taken from Bob’s experience, as well as plenty of other areas such as:


  • Selling to corporate clients
  • The difference between a skill builder and a business
  • The risk of using Amazon Alexa in business
  • Security concerns and DR compliance
  • The risks that corporate clients face and mitigations
  • The importance of being an Amazon partner
  • Private vs public skills
  • Locking down devices
  • Use cases and future use cases
  • Reporting and analytics
  • Agnostic roadmaps
  • The hard work required to start a startup


Our Guest

After spending 20 years working in the enterprise IT field, Bob Stolzberg founded VoiceXP, the voice first company that helps businesses create efficiencies and increase productivity through voice. Bob and his team work with enterprise clients and SMEs to implement Alexa for Business within organisations, from designing and building specific skills for clients to the full implementation of the devices and platform.


Bob’s experience of the enterprise IT environment gives him a unique understanding of the corporate IT world, the kind of people that make purchasing decisions and the kind of risks or concerns IT professionals will perceive with new technology platforms such as this. He’s managed to overcome those concerns, mitigate those risks and build a thriving business that’s just joined one of the top startup accelerators in the US, Capital Innovators.


Bob’s an immensely engaging and passionate guy, and offers some amazing guidance and pointers for anyone looking to turn voice into a business. This is a truly inspirational listen.


Links

 


Where to listen


How people REALLY use Amazon Alexa with Martin Porcheron
Mon, 16 Apr 2018 | 1:10:37

Today, we’re discussing the findings of Martin Porcheron’s study, ‘Voice interfaces in everyday life’. We uncover insights into how people actually use Amazon Alexa in the home. We find unique user behaviour, new technology challenges and understand what it all means for voice UX designers, developers and brands.


Voice interfaces in everyday life

Imagine if you could eavesdrop on someone's house and listen to how they interact with their Amazon Echo. Imagine, whenever someone said “Alexa”, you were there. Imagine being able to hear everything that was said for an entire minute before the word “Alexa” was uttered, and then stick around for a whole 60 seconds after the interaction with Alexa was over.

Well, that’s exactly what today’s guest and his associates did, and his findings offer some unique lessons for VUX designers, developers and brands that’ll help you create more natural voice user experiences that work.


In this episode, we’re discussing:

  • How people use digital assistants in public
  • The background of Voice interfaces in everyday life
  • The challenge of what you call your Alexa skill
  • The issue of recall
  • How Amazon can improve skill usage
  • The inherent problem of discoverability in voice
  • How Echo use is finely integrated into other activities
  • The implications of treating an Echo as a single user device
  • The challenge of speech recognition in the ‘hurly burly’ of modern life
  • How people collaboratively attempt to solve interaction problems
  • What is ‘political’ control and how does it apply to voice first devices?
  • Pranking people’s Alexa and the effect on future Amazon advertising
  • Designing for device control
  • Why these devices aren’t actually conversational
  • The importance of responses

Key takeaways for designers and developers

  • Give your skill a name that’s easy for recall
  • Make your responses succinct, so they fit within a busy and crowded environment
  • Make sure your responses are a resource for further action - how will the user do the next thing?
  • Consider designing for multiple users
  • Don’t use long intros and tutorials, get straight to the point
  • Don’t design for a conversation, design to get things done
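Several of those takeaways map directly onto how a skill response is built. The JSON shape below follows the Alexa Skills Kit response format; the wording and function are our own illustrative sketch, showing a succinct answer with a reprompt that acts as a resource for further action:

```python
def build_response(speech: str, reprompt: str, end_session: bool = False) -> dict:
    """Build an Alexa Skills Kit response: a short answer first, then a
    reprompt that tells the user what they can do next."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "reprompt": {"outputSpeech": {"type": "PlainText", "text": reprompt}},
            "shouldEndSession": end_session,
        },
    }

# Succinct answer; the reprompt is a resource for further action.
resp = build_response(
    "It's 21 degrees and sunny in Nottingham.",
    "You can ask for tomorrow's forecast, or say stop.",
)
```

Note what the takeaways rule out: no long intro before the answer, and no dead-end response that leaves the user guessing what to say next.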

Our Guest

Martin Porcheron is a Research Associate in the Mixed Reality Lab at the University of Nottingham and has a PhD in Ubiquitous Computing, a sub-set of Computer Science. Martin has conducted several studies in the field of human-computer interaction, including looking at how people make use of mobile phones in conversations i.e. how people use something like Siri mid-conversation and how those interactions unfold.

Martin’s angle isn’t to look at these things as critical or problematic, but to approach them as an opportunity to learn about how people make use of technology currently. He believes this enables us to make more informed design decisions.

The study we discuss today has won many plaudits including Best Paper Award at the CHI 2018 conference.


Links


Where you can listen:


]]>
Today, we’re discussing the findings of Martin Porcheron’s study, ‘Voice interfaces in everyday life’. We uncover insights into how people actually use Amazon Alexa in the home. We find unique user behaviour, new technology challenges and understand what it all means for voice UX designers, developers and brands.


Voice interfaces in everyday life

Imagine if you could eavesdrop on someone's house and listen to how they interact with their Amazon Echo. Imagine, whenever someone said “Alexa”, you were there. Imagine being able to hear everything that was said for an entire minute before the word “Alexa” was uttered, and then stick around for a whole 60 seconds after the interaction with Alexa was over.

Well, that’s exactly what today’s guest and his associates did, and his findings offer some unique lessons for VUX designers, developers and brands that’ll help you create more natural voice user experiences that work.


In this episode, we’re discussing:

  • How people use digital assistants in public
  • The background of Voice interfaces in everyday life
  • The challenge of what you call your Alexa skill
  • The issue of recall
  • How Amazon can improve skill usage
  • The inherent problem of discoverability in voice
  • How Echo use is finely integrated into other activities
  • The implications of treating an Echo as a single user device
  • The challenge of speech recognition in the ‘hurly-burly’ of modern life
  • How people collaboratively attempt to solve interaction problems
  • What is ‘political’ control and how does it apply to voice first devices?
  • Pranking people’s Alexa and the effect on future Amazon advertising
  • Designing for device control
  • Why these devices aren’t actually conversational
  • The importance of responses

Key takeaways for designers and developers

  • Give your skill a name that’s easy for recall
  • Make your responses succinct so they fit within a busy and crowded environment
  • Make sure your responses are a resource for further action - how will the user do the next thing?
  • Consider designing for multiple users
  • Don’t use long intros and tutorials, get straight to the point
  • Don’t design for a conversation, design to get things done

Our Guest

Martin Porcheron is a Research Associate in the Mixed Reality Lab at the University of Nottingham and has a PhD in Ubiquitous Computing, a subset of Computer Science. Martin has conducted several studies in the field of human-computer interaction, including looking at how people make use of mobile phones in conversations, i.e. how people use something like Siri mid-conversation and how those interactions unfold.

Martin’s angle isn’t to look at these things as critical or problematic, but to approach them as an opportunity to learn about how people make use of technology currently. He believes this enables us to make more informed design decisions.

The study we discuss today has won many plaudits including Best Paper Award at the CHI 2018 conference.


Links


Where you can listen:


]]>
<![CDATA[Tackling the challenges of discoverability and monetisation on Amazon Alexa with Jo Jaquinta]]> Mon, 09 Apr 2018 05:28:59 GMT 1:19:09 5acaf9b282cc21353715603b no full Today, we're getting deep into the biggest challenges facing designers and developers on the Alexa platform: being discovered and making money. And who better to take us through it, than one of the most experienced developers on the voice scene, Jo 'the Oracle' Jaquinta.


Speak to anyone who's serious about voice first development and they'll tell you the two biggest challenges facing the voice first world right now are skill discoverability and monetisation. Vasili Shynkarenka of Storyline mentioned it and so did Matt Hartman of Betaworks when they featured on the VUX World podcast previously.


However, we rarely hear stories from people who've tried everything they can to overcome these challenges. Until now.


In this episode, we're joined by Dustin Coates as co-host and we're speaking to Jo about his vast experience of designing and developing on the Amazon Alexa platform and how he's approached tackling those two big challenges.


We also discuss voice UX design techniques that Jo's picked up along the way, as well as the tools and techniques he uses for developing skills.


This one is jam-packed with epic insights from someone who knows more about this space than almost anyone right now, and includes discussion on a vast array of subjects including:


Discoverability:

  • The impact of advertising on increasing skill adoption
  • The effect of being featured in the Amazon Alexa newsletter
  • What Amazon can do to help skill discovery
  • How transferring between modalities can lose users


Monetisation:

  • The challenges of turning skill development into a business
  • The difference between Google’s and Amazon’s strategy
  • The two ways to make money from voice: the easy way and the hard way
  • Why a monetisation API shouldn't be the focus for developers
  • Why Amazon Alexa developer payouts are bad for the voice environment


Design:

  • The challenges of designing for voice with a screen
  • How immersive audio games help the visually impaired
  • How Amazon could improve the UX for users by moving to a 'streaming' approach to voice
  • Why you shouldn’t be aiming for a ‘conversational’ experience
  • What is the method of Loci and how can it be used when designing for voice?


Development:

  • Fuzzy matching
  • Building and maintaining your own library and SDK
  • Cross platform development
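
Fuzzy matching, the first of those development topics, can be sketched with Python's standard-library difflib. This is an illustrative example only, not how Jo implements it; the command list and `resolve` function are hypothetical:

```python
import difflib

# Speech recognition often returns near-misses ("six sords" for
# "six swords"); fuzzy matching maps them back to a known value.
# Hypothetical command list, for illustration only.
KNOWN_COMMANDS = ["six swords", "check inventory", "read the news"]

def resolve(heard, cutoff=0.6):
    """Return the closest known command, or None if nothing is close enough."""
    matches = difflib.get_close_matches(heard.lower(), KNOWN_COMMANDS,
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(resolve("six sords"))  # → six swords
print(resolve("banana"))     # → None
```

Raising `cutoff` trades recall for precision: a stricter threshold rejects more recognition errors but also more genuine near-misses.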


Other gems include:

  • Structural problems with the Alexa platform
  • How company culture affects voice strategy
  • Why it’s not early days in voice
  • Alexa for business and privacy


Our Guest

Jo Jaquinta is a software developer with over 20 years' experience. Jo started building skills on the Alexa platform a short time after it was released, has created a host of interesting skills and learned plenty along the way through pulling Alexa in all kinds of different directions. His knowledge, experience and plenty of lessons learned were all applied in building Jo's most recent skill, the madly complex 6 Swords.


Jo shares plenty of his voice design and development knowledge on his YouTube channel, which is full of engaging and interesting insights, and has put pen to paper to share his knowledge in the shape of two books on Alexa: How to Program Amazon Echo and Developing Amazon Alexa Games. He's also active on the Alexa Slack channel, helping people solve their development problems and consulting on voice design and development.


What Jo doesn't know about developing on Alexa isn't worth knowing. His immense knowledge and vast experience in this area are pretty much unrivalled, which is why I refer to him as 'the Oracle'.




Links


Where to Listen:

]]>
Today, we're getting deep into the biggest challenges facing designers and developers on the Alexa platform: being discovered and making money. And who better to take us through it, than one of the most experienced developers on the voice scene, Jo 'the Oracle' Jaquinta.


Speak to anyone who's serious about voice first development and they'll tell you the two biggest challenges facing the voice first world right now are skill discoverability and monetisation. Vasili Shynkarenka of Storyline mentioned it and so did Matt Hartman of Betaworks when they featured on the VUX World podcast previously.


However, we rarely hear stories from people who've tried everything they can to overcome these challenges. Until now.


In this episode, we're joined by Dustin Coates as co-host and we're speaking to Jo about his vast experience of designing and developing on the Amazon Alexa platform and how he's approached tackling those two big challenges.


We also discuss voice UX design techniques that Jo's picked up along the way, as well as the tools and techniques he uses for developing skills.


This one is jam-packed with epic insights from someone who knows more about this space than almost anyone right now, and includes discussion on a vast array of subjects including:


Discoverability:

  • The impact of advertising on increasing skill adoption
  • The effect of being featured in the Amazon Alexa newsletter
  • What Amazon can do to help skill discovery
  • How transferring between modalities can lose users


Monetisation:

  • The challenges of turning skill development into a business
  • The difference between Google’s and Amazon’s strategy
  • The two ways to make money from voice: the easy way and the hard way
  • Why a monetisation API shouldn't be the focus for developers
  • Why Amazon Alexa developer payouts are bad for the voice environment


Design:

  • The challenges of designing for voice with a screen
  • How immersive audio games help the visually impaired
  • How Amazon could improve the UX for users by moving to a 'streaming' approach to voice
  • Why you shouldn’t be aiming for a ‘conversational’ experience
  • What is the method of Loci and how can it be used when designing for voice?


Development:

  • Fuzzy matching
  • Building and maintaining your own library and SDK
  • Cross platform development


Other gems include:

  • Structural problems with the Alexa platform
  • How company culture affects voice strategy
  • Why it’s not early days in voice
  • Alexa for business and privacy


Our Guest

Jo Jaquinta is a software developer with over 20 years' experience. Jo started building skills on the Alexa platform a short time after it was released, has created a host of interesting skills and learned plenty along the way through pulling Alexa in all kinds of different directions. His knowledge, experience and plenty of lessons learned were all applied in building Jo's most recent skill, the madly complex 6 Swords.


Jo shares plenty of his voice design and development knowledge on his YouTube channel, which is full of engaging and interesting insights, and has put pen to paper to share his knowledge in the shape of two books on Alexa: How to Program Amazon Echo and Developing Amazon Alexa Games. He's also active on the Alexa Slack channel, helping people solve their development problems and consulting on voice design and development.


What Jo doesn't know about developing on Alexa isn't worth knowing. His immense knowledge and vast experience in this area are pretty much unrivalled, which is why I refer to him as 'the Oracle'.




Links


Where to Listen:

]]>
<![CDATA[My first 30 days as a VUI designer with Ilana Shalowitz and Brian Bauman]]> Mon, 02 Apr 2018 05:57:30 GMT 1:00:56 5ac1c64a76380bf04c806d91 no full Today, we’re getting into detail about what it’s like to be a full-time VUI designer. We’re discussing the details of the role, the day to day duties and the skillsets that are important to succeed in designing voice user interfaces.

The role of a VUI designer has been around for a while, but it’s not so common. However, with the rise of voice as an access point for controlling technology, this is one of the roles of the future.


If you’re planning for that future and are considering seeking work in the voice first space; or if you’re a voice first design hobbyist looking to take it full-time; or if you’re generally interested in what it takes to create conversational interfaces, then this is a great episode for you.


We’re joined by two professional VUI designers, Ilana Shalowitz and Brian Bauman of Emmi, and together they’ll be taking us through the ins and outs of the role that designs voice user interfaces for Emmi’s care calls.


In this episode

Ilana takes us through an overview of the VUI designer role and discusses what skillsets are important. She takes us through the interview process, bedding in, and drops some detailed knowledge of voice user interface design based on her years of experience in the field.


Brian then takes us through the role in more detail and looks at the specifics of the role, where a VUI designer fits into a project, what the day to day activities and duties are, and what he found during his first 30 days.


We also discuss things like:

  • How to pronounce VUI (V.U.I. or "Vooey")
  • The difference between chat bot design and conversational VUI
  • What is prosody and why is it important
  • Language
  • Breathing
  • Error recovery
  • Directing voice talent
  • Reporting and measuring success
  • Broader voice user interface design tips


Our guests

Ilana Shalowitz is the VUI Design Manager at Emmi and has a background in marketing and design. Ilana is forming a great reputation in the voice first space and is quickly becoming a leading voice for voice in the healthcare sector. She featured at the Alexa Conference 2018, spoke at the AI Summit 2018, has featured on the VoiceFirst.FM Voice of Healthcare podcast (Episode 5) and is a keynote speaker at the Voice of Healthcare Summit in August in Boston.


Brian Bauman joined Emmi recently, taking on his first role as a VUI designer. A former playwright with a background in the creative arts, he fills us in on what his first month as a VUI designer was like and how his creative background gave him some valuable transferable skills.


About Emmi

Emmi Solutions is part of the Wolters Kluwer stable and helps care organisations extend the reach of their care through technology.


Ilana and Brian both work on the automated voice-based outbound calls side of the company. They create call scripts and dialogue flows that are turned into real calls that patients receive and can interact with in conversation. This means that healthcare providers can speak to thousands of patients without needing to make any manual calls at all.


Links


]]>
Today, we’re getting into detail about what it’s like to be a full-time VUI designer. We’re discussing the details of the role, the day to day duties and the skillsets that are important to succeed in designing voice user interfaces.

The role of a VUI designer has been around for a while, but it’s not so common. However, with the rise of voice as an access point for controlling technology, this is one of the roles of the future.


If you’re planning for that future and are considering seeking work in the voice first space; or if you’re a voice first design hobbyist looking to take it full-time; or if you’re generally interested in what it takes to create conversational interfaces, then this is a great episode for you.


We’re joined by two professional VUI designers, Ilana Shalowitz and Brian Bauman of Emmi, and together they’ll be taking us through the ins and outs of the role that designs voice user interfaces for Emmi’s care calls.


In this episode

Ilana takes us through an overview of the VUI designer role and discusses what skillsets are important. She takes us through the interview process, bedding in, and drops some detailed knowledge of voice user interface design based on her years of experience in the field.


Brian then takes us through the role in more detail and looks at the specifics of the role, where a VUI designer fits into a project, what the day to day activities and duties are, and what he found during his first 30 days.


We also discuss things like:

  • How to pronounce VUI (V.U.I. or "Vooey")
  • The difference between chat bot design and conversational VUI
  • What is prosody and why is it important
  • Language
  • Breathing
  • Error recovery
  • Directing voice talent
  • Reporting and measuring success
  • Broader voice user interface design tips


Our guests

Ilana Shalowitz is the VUI Design Manager at Emmi and has a background in marketing and design. Ilana is forming a great reputation in the voice first space and is quickly becoming a leading voice for voice in the healthcare sector. She featured at the Alexa Conference 2018, spoke at the AI Summit 2018, has featured on the VoiceFirst.FM Voice of Healthcare podcast (Episode 5) and is a keynote speaker at the Voice of Healthcare Summit in August in Boston.


Brian Bauman joined Emmi recently, taking on his first role as a VUI designer. A former playwright with a background in the creative arts, he fills us in on what his first month as a VUI designer was like and how his creative background gave him some valuable transferable skills.


About Emmi

Emmi Solutions is part of the Wolters Kluwer stable and helps care organisations extend the reach of their care through technology.


Ilana and Brian both work on the automated voice-based outbound calls side of the company. They create call scripts and dialogue flows that are turned into real calls that patients receive and can interact with in conversation. This means that healthcare providers can speak to thousands of patients without needing to make any manual calls at all.


Links


]]>
<![CDATA[Voice first user research with Konstantin Samoylov and Adam Banks]]> Mon, 26 Mar 2018 08:28:12 GMT 1:18:12 5ab8af1d791733c7782ca84d no full We’re talking to ex-Googlers, Konstantin Samoylov and Adam Banks, about their findings from conducting research on voice assistants at Google and their approach to building world-leading UX labs.

This episode is a whirlwind of insights, practical advice and engaging anecdotes that cover the width and breadth of user research and user behaviour in the voice first and voice assistant space. It’s littered with examples of user behaviour found when researching voice at Google and peppered with guidance on how to create world-class user research spaces.

Some of the things we discuss include:

  • Findings from countless voice assistant studies at Google
  • Real user behaviour in the on-boarding process
  • User trust of voice assistants
  • What people expect from voice assistants
  • User mental models when using voice assistants
  • The difference between replicating your app and designing for voice
  • The difference between a voice assistant and a voice interface
  • The difference between user expectations and reality
  • How voice assistant responses can shape people’s expectations of the assistant’s full functionality
  • What makes a good UX lab
  • How to design a user research space
  • How voice will disrupt and challenge organisational structure
  • Is there a place for advertising on voice assistants?
  • Mistakes people make when seeking a voice presence (Hint: starting with ‘let’s create an Alexa Skill’ rather than ‘how will people interact with our brand via voice?’)
  • The importance (or lack thereof) of speed in voice user interfaces
  • How to fit voice user research into a design sprint

Plus, for those of you watching on YouTube, we have a tour of the UX Lab in a Box!


Our Guests

Konstantin Samoylov and Adam Banks are world-leading user researchers and research lab creators, and founders of user research consultancy firm, UX Study.

The duo left Google in 2016 after pioneering studies in virtual assistants and voice, as well as designing and creating over 50 user research labs across the globe, and managing the entirety of Google’s global user research spaces.

While working as researchers and lab builders at Google and showing companies their research spaces, Konstantin and Adam were often asked whether they could recommend a company to build a similar lab. Upon realising that no such company existed, they set about creating it!

UX Study designs and builds research and design spaces for companies, provides research consultancy services and training, as well as hires and sells its signature product, UX Lab in a Box.


UX Lab in a Box

The Lab in a Box (http://ux-study.com/products/lab-in-a-box/) is an audio and video recording, mixing and broadcasting unit designed specifically to help user researchers conduct reliable, consistent and speedy studies.

It converts any space into a user research lab in minutes and helps researchers focus on the most important aspect of their role - research!

It was born after the duo, in true researcher style, conducted user research on user researchers and found that 30% of a researcher’s time is spent fiddling with cables, setting up studies, editing video and generally faffing around doing things that aren’t research!


Konstantin Samoylov

Konstantin Samoylov is an award-winning user researcher. He has nearly 20 years’ experience in the field and has conducted over 1000 user research studies.

He was part of the team that pioneered voice at Google and was the first researcher to focus on voice dialogues and actions. By the time he left, just 2 years ago, most of the studies into user behaviour on voice assistants at Google were conducted by him.


Adam Banks

It’s likely that Adam Banks has more experience in creating user research spaces than anyone else on the planet. He designed, built and managed all of Google’s user research labs globally including the newly-opened ‘Userplex’ in San Francisco.

He’s created over 50 research and design spaces across the globe for Google, and also has vast experience in conducting user research himself.


Links

Visit the UX Study website

Follow UX Study on Twitter

Check out the UX Lab in a Box

Follow Konstantin on Twitter

Follow Adam on Twitter

]]>
We’re talking to ex-Googlers, Konstantin Samoylov and Adam Banks, about their findings from conducting research on voice assistants at Google and their approach to building world-leading UX labs.

This episode is a whirlwind of insights, practical advice and engaging anecdotes that cover the width and breadth of user research and user behaviour in the voice first and voice assistant space. It’s littered with examples of user behaviour found when researching voice at Google and peppered with guidance on how to create world-class user research spaces.

Some of the things we discuss include:

  • Findings from countless voice assistant studies at Google
  • Real user behaviour in the on-boarding process
  • User trust of voice assistants
  • What people expect from voice assistants
  • User mental models when using voice assistants
  • The difference between replicating your app and designing for voice
  • The difference between a voice assistant and a voice interface
  • The difference between user expectations and reality
  • How voice assistant responses can shape people’s expectations of the assistant’s full functionality
  • What makes a good UX lab
  • How to design a user research space
  • How voice will disrupt and challenge organisational structure
  • Is there a place for advertising on voice assistants?
  • Mistakes people make when seeking a voice presence (Hint: starting with ‘let’s create an Alexa Skill’ rather than ‘how will people interact with our brand via voice?’)
  • The importance (or lack thereof) of speed in voice user interfaces
  • How to fit voice user research into a design sprint

Plus, for those of you watching on YouTube, we have a tour of the UX Lab in a Box!


Our Guests

Konstantin Samoylov and Adam Banks are world-leading user researchers and research lab creators, and founders of user research consultancy firm, UX Study.

The duo left Google in 2016 after pioneering studies in virtual assistants and voice, as well as designing and creating over 50 user research labs across the globe, and managing the entirety of Google’s global user research spaces.

While working as researchers and lab builders at Google and showing companies their research spaces, Konstantin and Adam were often asked whether they could recommend a company to build a similar lab. Upon realising that no such company existed, they set about creating it!

UX Study designs and builds research and design spaces for companies, provides research consultancy services and training, as well as hires and sells its signature product, UX Lab in a Box.


UX Lab in a Box

The Lab in a Box (http://ux-study.com/products/lab-in-a-box/) is an audio and video recording, mixing and broadcasting unit designed specifically to help user researchers conduct reliable, consistent and speedy studies.

It converts any space into a user research lab in minutes and helps researchers focus on the most important aspect of their role - research!

It was born after the duo, in true researcher style, conducted user research on user researchers and found that 30% of a researcher’s time is spent fiddling with cables, setting up studies, editing video and generally faffing around doing things that aren’t research!


Konstantin Samoylov

Konstantin Samoylov is an award-winning user researcher. He has nearly 20 years’ experience in the field and has conducted over 1000 user research studies.

He was part of the team that pioneered voice at Google and was the first researcher to focus on voice dialogues and actions. By the time he left, just 2 years ago, most of the studies into user behaviour on voice assistants at Google were conducted by him.


Adam Banks

It’s likely that Adam Banks has more experience in creating user research spaces than anyone else on the planet. He designed, built and managed all of Google’s user research labs globally including the newly-opened ‘Userplex’ in San Francisco.

He’s created over 50 research and design spaces across the globe for Google, and also has vast experience in conducting user research himself.


Links

Visit the UX Study website

Follow UX Study on Twitter

Check out the UX Lab in a Box

Follow Konstantin on Twitter

Follow Adam on Twitter

]]>
<![CDATA[Hearing voices: a strategic view of the voice space with Matt Hartman]]> Mon, 19 Mar 2018 05:00:00 GMT 48:28 5aab80b1f1b0453a67c92e21 no full This week, Dustin and I are joined by Matt Hartman, partner at Betaworks, curator of the Hearing Voices newsletter and creator of the Wiffy Alexa Skill.


In this episode, we’re discussing:


  • All about Betaworks
  • A strategic vision for voice
  • Changing user behaviour
  • On-demand interfaces
  • Friction and psychological friction
  • How context influences your design interface
  • The 2 types of companies that’ll get built on voice platforms
  • Differences between GUI and VUI design
  • Voice camp
  • The Wiffy Alexa Skill
  • Lessons learned building your first Alexa Skill
  • Text message on-boarding
  • Challenges in the voice space


Our Guest, Matt Hartman

Matt Hartman has been with Betaworks for the past 4 years and handles the investment side of the company. Matt spends his days with his ear to the ground, meeting company founders and entrepreneurs, searching for the next big investment opportunities.


Paying attention to trends in user behaviour and searching for the next new wave of technology that will change the way people communicate has led Matt and Betaworks to focus on the voice space.


Matt has developed immense knowledge and passion for voice and is a true visionary. He totally gets the current state of play in the voice space and is a true design thinker. He has an entirely different and unique perspective on the voice scene: the voice ecosystem, voice strategy, user behaviour trends, challenges and the future of the industry.


Matt curates the Hearing Voices newsletter to share his reading with the rest of the voice space and created the Wiffy Alexa Skill, which lets you ask Alexa for the Wifi password. It’s one of the few Skills that receives the fabled Alexa Developer Reward.


Betaworks

Betaworks is a startup platform that builds products like bit.ly, Chartbeat and GIPHY. It invests in companies like Tumblr, Kickstarter and Medium and has recently turned its attention to audio and voice platforms such as Anchor, Breaker and Gimlet.


As part of voice camp in 2017, Betaworks invested in a host of voice-first companies including Jovo, who featured on episode 5 of the VUX World podcast, as well as Spoken Layer, Shine and John Done, which conversational AI guru Jeff Smith (episode 4) was involved in.


Links


]]>
This week, Dustin and I are joined by Matt Hartman, partner at Betaworks, curator of the Hearing Voices newsletter and creator of the Wiffy Alexa Skill.


In this episode, we’re discussing:


  • All about Betaworks
  • A strategic vision for voice
  • Changing user behaviour
  • On-demand interfaces
  • Friction and psychological friction
  • How context influences your design interface
  • The 2 types of companies that’ll get built on voice platforms
  • Differences between GUI and VUI design
  • Voice camp
  • The Wiffy Alexa Skill
  • Lessons learned building your first Alexa Skill
  • Text message on-boarding
  • Challenges in the voice space


Our Guest, Matt Hartman

Matt Hartman has been with Betaworks for the past 4 years and handles the investment side of the company. Matt spends his days with his ear to the ground, meeting company founders and entrepreneurs, searching for the next big investment opportunities.


Paying attention to trends in user behaviour and searching for the next new wave of technology that will change the way people communicate has led Matt and Betaworks to focus on the voice space.


Matt has developed immense knowledge and passion for voice and is a true visionary. He totally gets the current state of play in the voice space and is a true design thinker. He has an entirely different and unique perspective on the voice scene: the voice ecosystem, voice strategy, user behaviour trends, challenges and the future of the industry.


Matt curates the Hearing Voices newsletter to share his reading with the rest of the voice space and created the Wiffy Alexa Skill, which lets you ask Alexa for the Wifi password. It’s one of the few Skills that receives the fabled Alexa Developer Reward.


Betaworks

Betaworks is a startup platform that builds products like bit.ly, Chartbeat and GIPHY. It invests in companies like Tumblr, Kickstarter and Medium and has recently turned its attention to audio and voice platforms such as Anchor, Breaker and Gimlet.


As part of voice camp in 2017, Betaworks invested in a host of voice-first companies including Jovo, who featured on episode 5 of the VUX World podcast, as well as Spoken Layer, Shine and John Done, which conversational AI guru Jeff Smith (episode 4) was involved in.


Links


]]>
<![CDATA[All about Mycroft with Joshua Montgomery, Steve Penrod and Derick Schweppe]]> Mon, 12 Mar 2018 05:29:00 GMT 1:20:00 5aa46b670185f54f5332bca3 no full This week, we’re joined by the Mycroft AI team, and we’re getting deep into designing and developing on the open source alternative to Amazon Alexa and Google Assistant.

If you’ve tried creating voice apps on platforms such as Amazon Alexa and Google Assistant, then you’ll no doubt be familiar with their current limitations. Push notifications, monetisation and all-round flexibility generally leave plenty to be desired.

What if there was an alternative? A platform that really did let you create whatever you wanted. Something that'll let you monetise. Something completely open to being used in a way that you want to use it.

Well, that’s what the team at Mycroft AI have built.




What is Mycroft AI?

Mycroft AI is the world’s first open source voice assistant that runs anywhere. On desktop, mobile, smart speakers. In cars, fridges, and washing machines. You name it. You can put it where you like and do with it what you like as well.

One member of the Mycroft community has hooked the platform up to a webcam and created a facial recognition feature that uses a person’s face instead of a wake word. When you look at the camera, the speaker wakes and is ready for you to speak to it!

As well as being open source and flexible, if you create something exceptional, it could even become the default skill for that feature on the platform. That’s like you creating a really great weather skill on Alexa and Amazon using it as the default way to tell people the weather!
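
To make the skill model concrete, here is a minimal sketch of how an assistant might route a wake-word utterance to a registered skill handler. This is purely illustrative Python, not the real Mycroft API; every name below (`Assistant`, `register_intent`, `hear`) is hypothetical:

```python
# Illustrative sketch of a voice assistant's skill model: skills register
# intent keywords, and the assistant routes a transcribed utterance
# (prefixed by the wake word) to the matching handler.

class Assistant:
    def __init__(self, wake_word="hey mycroft"):
        self.wake_word = wake_word
        self.handlers = {}  # intent keyword -> handler function

    def register_intent(self, keyword, handler):
        self.handlers[keyword] = handler

    def hear(self, utterance):
        text = utterance.lower()
        if not text.startswith(self.wake_word):
            return None  # ignore speech that lacks the wake word
        command = text[len(self.wake_word):].strip()
        for keyword, handler in self.handlers.items():
            if keyword in command:
                return handler(command)
        return "Sorry, I didn't catch that."

assistant = Assistant()
assistant.register_intent("weather", lambda cmd: "It's sunny today.")
print(assistant.hear("Hey Mycroft, what's the weather like?"))  # → It's sunny today.
```

In an open platform, the routing table is the interesting part: because it's yours, a community skill can be promoted to the default handler for a keyword, which is exactly the default-skill mechanism described above.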

Plus, your personal data is kept totally private.

And Mycroft aren’t just creating cool software; they have a range of smart speakers as well. The Mark I speaker is on sale now and the Mark II is on Indiegogo right now.




Our Guests

Today, we’re joined by Joshua Montgomery, CEO; Steve Penrod, CTO; and Derick Schweppe, CDO to talk all things Mycroft AI.

We’re also joined again by co-host, Dustin Coates, and we’re getting into detail about:

  • Where Mycroft AI came from and the company’s vision for voice and AI
  • The differences between Mycroft and the other players such as Alexa and Google Assistant
  • The value of an open source voice assistant
  • About the platform (how it works, how you can get up and running)
  • About the range of smart speakers
  • Privacy and security
  • The Mycroft community and what people are building
  • Incentives and reasons to develop on Mycroft AI
  • Dev Chops with Dustin: a new feature where Dustin gets into the dev details of the Mycroft platform
  • Voice design techniques and processes
  • The future of voice

Links

]]>
This week, we’re joined by the Mycroft AI team, and we’re getting deep into designing and developing on the open source alternative to Amazon Alexa and Google Assistant.

If you’ve tried creating voice apps on platforms such as Amazon Alexa and Google Assistant, then you’ll no doubt be familiar with their current limitations. Push notifications, monetisation and all-round flexibility generally leave plenty to be desired.

What if there was an alternative? A platform that really did let you create whatever you wanted. Something that'll let you monetise. Something completely open to being used in a way that you want to use it.

Well, that’s what the team at Mycroft AI have built.




What is Mycroft AI?

Mycroft AI is the world’s first open source voice assistant that runs anywhere. On desktop, mobile, smart speakers. In cars, fridges, and washing machines. You name it. You can put it where you like and do with it what you like as well.

One member of the Mycroft community has hooked the platform up to a webcam and created a facial recognition feature that uses a person’s face instead of a wake word. When you look at the camera, the speaker wakes and is ready for you to speak to it!

As well as being open source and flexible, if you create something exceptional, it could even become the default skill for that feature on the platform. That’s the equivalent of building a really great weather skill on Alexa and Amazon adopting it as the default way to tell people the weather!

Plus, your personal data is kept totally private.

And Mycroft aren’t just creating cool software; they have a range of smart speakers as well. The Mark I speaker is on sale now and the Mark II is live on Indiegogo.




Our Guests

Today, we’re joined by Joshua Montgomery, CEO; Steve Penrod, CTO; and Derick Schweppe, CDO, to talk all things Mycroft AI.

We’re also joined again by co-host, Dustin Coates, and we’re getting into detail about:

  • Where Mycroft AI came from and the company’s vision for voice and AI
  • The differences between Mycroft and the other players such as Alexa and Google Assistant
  • The value of an open source voice assistant
  • About the platform (how it works, how you can get up and running)
  • About the range of smart speakers
  • Privacy and security
  • The Mycroft community and what people are building
  • Incentives and reasons to develop on Mycroft AI
  • Dev Chops with Dustin: a new feature where Dustin gets into the dev details of the Mycroft platform
  • Voice design techniques and processes
  • The future of voice

Links

]]>
<![CDATA[How to create an Alexa Skill without coding with Vasili Shynkarenka]]> Mon, 05 Mar 2018 06:33:07 GMT 1:06:07 5a9ce4a45fc658720a3ccc9d no full But first, let's welcome co-host, Dustin Coates

We're joined in this episode by our new co-host, Dustin Coates. Dustin is the author of Voice Applications for Alexa and Google Assistant and has been involved in the voice scene since day 1. With extensive experience in software engineering, deep knowledge of Alexa and Google Assistant development and an immense passion for voice, Dustin brings a new perspective and different angles of questioning that, not only technical folk, but non-tech people will appreciate as well.


One of the challenges with new technology platforms is that you typically need to speak the lingo to develop on them. As the internet has progressed, there seem to be a million dev languages you'd need to know just to create your website or app.


It wasn’t until relatively recently that tools cropped up to allow designers and total beginners to build on the web. Tools like Wordpress, Weebly and Squarespace have made it easy for anyone to create a presence online.


The great thing about having that history of the web is that we can learn from the past and apply the things that work well to new industries and technology. That’s exactly what Vasili has done through the creation of Storyline. It's the Weebly of voice.


It has a drag and drop interface and a user friendly workflow that will allow anyone to create an Alexa Skill without needing to code a single line.


It will let more technical folk do further work if they’d like to, such as using an API integration to interrogate data, but for the less technical folk out there, what you get ‘out of the box’ is more than enough to build a well-rounded Skill.
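For the technical folk, that "further work" usually means pointing the Skill at your own webhook. As a rough illustration, here's the kind of minimal JSON body an Alexa-compatible endpoint sends back; this is a simplified sketch of the Alexa custom skill response format, with all the endpoint wiring left out:

```python
import json

def build_alexa_response(speech_text, end_session=True):
    """Build a minimal Alexa-style JSON response body (simplified sketch)."""
    return {
        "version": "1.0",
        "response": {
            # PlainText speech is the simplest output type; SSML is the other option.
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

body = build_alexa_response("The verdict is in: the defendant is guilty!")
print(json.dumps(body, indent=2))
```

A tool like Storyline builds this JSON for you behind the scenes; an API integration just swaps in dynamic text before the response goes out.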


In fact, it’s testament to how much flexibility is baked into the tool that the winner of the recent Amazon Alexa Skills Challenge: Kids, Kids Court, was created in Storyline.


In this episode, we get into detail about:

  • What Storyline is, how it works and how to get up and running
  • Testing and publishing Skills
  • How to make your Skill more discoverable
  • The Storyline community
  • Future features and the roadmap
  • The challenges facing developers and solutions to solving them
  • Vasili’s vision for where the voice space is heading
  • Advice for beginner Skill-builders and voice heads




Our guest

Vasili Shynkarenka is the founder and CEO of Storyline. After creating and selling an agency that specialised in creating conversational experiences for brands, Vasili turned his attention to focus on Storyline.


Vasili is madly passionate about voice and has immense experience in the field. He’s super-keen for all kinds of people to get involved in creating voice experiences, no matter what their skill level. His vision for the future of smart speakers and his knowledge of creating voice experiences are inspirational.


This episode is packed with insights and tips and tricks to help people of all skill levels create an Alexa Skill.




Links

]]>
But first, let's welcome co-host, Dustin Coates

We're joined in this episode by our new co-host, Dustin Coates. Dustin is the author of Voice Applications for Alexa and Google Assistant and has been involved in the voice scene since day 1. With extensive experience in software engineering, deep knowledge of Alexa and Google Assistant development and an immense passion for voice, Dustin brings a new perspective and different angles of questioning that, not only technical folk, but non-tech people will appreciate as well.


One of the challenges with new technology platforms is that you typically need to speak the lingo to develop on them. As the internet has progressed, there seem to be a million dev languages you'd need to know just to create your website or app.


It wasn’t until relatively recently that tools cropped up to allow designers and total beginners to build on the web. Tools like Wordpress, Weebly and Squarespace have made it easy for anyone to create a presence online.


The great thing about having that history of the web is that we can learn from the past and apply the things that work well to new industries and technology. That’s exactly what Vasili has done through the creation of Storyline. It's the Weebly of voice.


It has a drag and drop interface and a user friendly workflow that will allow anyone to create an Alexa Skill without needing to code a single line.


It will let more technical folk do further work if they’d like to, such as using an API integration to interrogate data, but for the less technical folk out there, what you get ‘out of the box’ is more than enough to build a well-rounded Skill.


In fact, it’s testament to how much flexibility is baked into the tool that the winner of the recent Amazon Alexa Skills Challenge: Kids, Kids Court, was created in Storyline.


In this episode, we get into detail about:

  • What Storyline is, how it works and how to get up and running
  • Testing and publishing Skills
  • How to make your Skill more discoverable
  • The Storyline community
  • Future features and the roadmap
  • The challenges facing developers and solutions to solving them
  • Vasili’s vision for where the voice space is heading
  • Advice for beginner Skill-builders and voice heads




Our guest

Vasili Shynkarenka is the founder and CEO of Storyline. After creating and selling an agency that specialised in creating conversational experiences for brands, Vasili turned his attention to focus on Storyline.


Vasili is madly passionate about voice and has immense experience in the field. He’s super-keen for all kinds of people to get involved in creating voice experiences, no matter what their skill level. His vision for the future of smart speakers and his knowledge of creating voice experiences are inspirational.


This episode is packed with insights and tips and tricks to help people of all skill levels create an Alexa Skill.




Links

]]>
<![CDATA[Cross-platform voice development with Jan König]]> Mon, 26 Feb 2018 08:55:57 GMT 57:27 5a93cb9da5f5bf0c738a8e38 no full 5 Find out all about the Jovo framework that lets you create Alexa Skills and Google Assistant apps at the same time, using the same code!


You know how you always need to write platform-specific code for everything? One lot of code for your iOS app, another load for Android and more for Windows (if you even bother). The same challenges exist today when creating voice apps. Or rather, they did exist, until Jovo came along.


With the Jovo framework, you can create an Alexa Skill and a Google Assistant app all from the same lot of code. It's part of Jovo's bigger mission to enable you to create multi-modal experiences with ease and to join the disparate tech outlets together into a unified experience across all devices and platforms.
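The core idea behind a framework like Jovo can be sketched as a thin layer that normalises each platform's request into one shape, so a single handler serves both. The payload shapes below are heavily simplified and this is not Jovo's actual API, just an illustration of the technique:

```python
def extract_intent(request):
    """Normalise an Alexa- or Dialogflow-style request to one intent name.

    The payload shapes here are simplified sketches of each platform's format.
    """
    if "request" in request:  # Alexa-style payload
        return request["request"]["intent"]["name"]
    if "result" in request:   # Dialogflow v1-style payload (Google Assistant)
        return request["result"]["metadata"]["intentName"]
    raise ValueError("unknown platform payload")

def handle(request):
    """One handler table, any platform."""
    handlers = {"HelloIntent": lambda: "Hello from one codebase!"}
    return handlers[extract_intent(request)]()

alexa_req = {"request": {"intent": {"name": "HelloIntent"}}}
google_req = {"result": {"metadata": {"intentName": "HelloIntent"}}}
print(handle(alexa_req))   # Hello from one codebase!
print(handle(google_req))  # same response, no platform-specific handler code
```

The framework does the same job in the other direction too, translating your one response back into each platform's expected format.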


Our Guest

Jan König is one of the co-founders of Jovo and we're speaking to him today about all things cross-platform voice development. We'll hear from Jan about things like:

  • what 'multi-modal' actually means
  • features of the Jovo framework
  • the Jovo community and Jovo Studios
  • the differences between developing for Alexa and Google Assistant
  • the challenges of developing voice experiences
  • the skills needed for building Skills
  • designer and developer relationships in the voice world
  • testing voice apps
  • Jovo 1.0 and the future of Jovo


Links

]]>
Find out all about the Jovo framework that lets you create Alexa Skills and Google Assistant apps at the same time, using the same code!


You know how you always need to write platform-specific code for everything? One lot of code for your iOS app, another load for Android and more for Windows (if you even bother). The same challenges exist today when creating voice apps. Or rather, they did exist, until Jovo came along.


With the Jovo framework, you can create an Alexa Skill and a Google Assistant app all from the same lot of code. It's part of Jovo's bigger mission to enable you to create multi-modal experiences with ease and to join the disparate tech outlets together into a unified experience across all devices and platforms.


Our Guest

Jan König is one of the co-founders of Jovo and we're speaking to him today about all things cross-platform voice development. We'll hear from Jan about things like:

  • what 'multi-modal' actually means
  • features of the Jovo framework
  • the Jovo community and Jovo Studios
  • the differences between developing for Alexa and Google Assistant
  • the challenges of developing voice experiences
  • the skills needed for building Skills
  • designer and developer relationships in the voice world
  • testing voice apps
  • Jovo 1.0 and the future of Jovo


Links

]]>
<![CDATA[All about conversational AI with Jeff Smith]]> Mon, 19 Feb 2018 05:17:44 GMT 1:06:55 5a8a5df81f6a2f4d6b91e83c no full 4 Conversational AI crops up constantly in conversations about voice, but what actually is it? How the heck does it work? And how can you use it? We speak to Jeff Smith to find out.


In this episode, we cover:


  • An overview of conversational AI - what it is and how it works
  • The role of voice in conversational AI
  • How and why brands should consider using it
  • How you can get started with machine learning and conversational AI
  • Challenges and opportunities such as the state of analytics and security


At the close of the show, I said that this was:


“One of the most interesting conversations I’ve ever had in my life.”


And I wasn’t lying.


Getting to grips with Conversational AI

If you’re not familiar with the concepts of conversational AI, this episode will give you a great introduction.

If you are familiar and work in the industry, Jeff drops some great nuggets and learnings from his extensive experience.


And if you’re interested in this from a branding perspective, by the end of this episode, you’ll have a full understanding of the contexts and environments where it’s useful.
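To give a flavour of the "how it works" part before you listen: at its simplest, a conversational AI loop classifies the user's intent, fills the slots that intent needs, and asks follow-ups until it can act. Here's a toy sketch where keyword matching stands in for the statistical NLU models a real system like Amelia uses; the intents and slots are made up for illustration:

```python
# Toy conversational-AI loop: intent classification + slot filling.
# Keyword matching stands in for real NLU; intents here are hypothetical.

INTENTS = {
    "book_meeting": {"keywords": ["meeting", "schedule"], "slots": ["day"]},
    "greet": {"keywords": ["hello", "hi"], "slots": []},
}
DAYS = ["monday", "tuesday", "wednesday", "thursday", "friday"]

def classify(utterance):
    """Pick the first intent whose keyword appears in the utterance."""
    words = utterance.lower().split()
    for name, spec in INTENTS.items():
        if any(k in words for k in spec["keywords"]):
            return name
    return None

def fill_slots(utterance, intent):
    """Scan the utterance for values the intent's slots accept."""
    found = {}
    for word in utterance.lower().split():
        if "day" in INTENTS[intent]["slots"] and word in DAYS:
            found["day"] = word
    return found

def respond(utterance):
    intent = classify(utterance)
    if intent is None:
        return "Sorry, I didn't get that."
    slots = fill_slots(utterance, intent)
    missing = [s for s in INTENTS[intent]["slots"] if s not in slots]
    if missing:
        return f"Which {missing[0]}?"  # follow-up question to fill the slot
    if intent == "book_meeting":
        return f"Booking a meeting on {slots['day']}."
    return "Hello!"

print(respond("Schedule a meeting"))            # Which day?
print(respond("Schedule a meeting on friday"))  # Booking a meeting on friday.
```

Production systems replace each of these functions with trained models and add dialogue state, but the loop itself is recognisably the same shape.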


Our Guest

Jeff Smith, author of Reactive Machine Learning Systems, has bags of experience in machine learning and conversational AI. He’s built a series of AIs, including Amy and Andrew at X.ai (what a cool domain!): AI personal assistants that help people schedule meetings.


Jeff now works with IPsoft and manages the conversational AI team who’re building Amelia. Amelia, as you’ll find out in the show, is an extremely sophisticated AI that can perform many human tasks, increasing productivity and business efficiencies.


Links


]]>
Conversational AI crops up constantly in conversations about voice, but what actually is it? How the heck does it work? And how can you use it? We speak to Jeff Smith to find out.


In this episode, we cover:


  • An overview of conversational AI - what it is and how it works
  • The role of voice in conversational AI
  • How and why brands should consider using it
  • How you can get started with machine learning and conversational AI
  • Challenges and opportunities such as the state of analytics and security


At the close of the show, I said that this was:


“One of the most interesting conversations I’ve ever had in my life.”


And I wasn’t lying.


Getting to grips with Conversational AI

If you’re not familiar with the concepts of conversational AI, this episode will give you a great introduction.

If you are familiar and work in the industry, Jeff drops some great nuggets and learnings from his extensive experience.


And if you’re interested in this from a branding perspective, by the end of this episode, you’ll have a full understanding of the contexts and environments where it’s useful.


Our Guest

Jeff Smith, author of Reactive Machine Learning Systems, has bags of experience in machine learning and conversational AI. He’s built a series of AIs, including Amy and Andrew at X.ai (what a cool domain!): AI personal assistants that help people schedule meetings.


Jeff now works with IPsoft and manages the conversational AI team who’re building Amelia. Amelia, as you’ll find out in the show, is an extremely sophisticated AI that can perform many human tasks, increasing productivity and business efficiencies.


Links


]]>
<![CDATA[How to build an Alexa Skill in Wordpress with Tom Harrigan]]> Mon, 12 Feb 2018 09:15:22 GMT 42:55 5a8158b623647cb01856ca0e no full 3 In this episode, we’re going to show you how you can build an Alexa Skill from right within Wordpress.

Wordpress powers almost a third of the internet and now millions of websites running Wordpress can all have a presence on voice. It’s all thanks to VoiceWP, the Wordpress plugin that lets you build an Alexa Skill from within the most widely adopted CMS on the planet.


You can create Flash Briefings with ease and even have Alexa read the content of your website. We all know about audio books, but this could be the first opportunity to have your website content turned into audio form and read aloud as soon as it’s published, without much effort at all. It’s super simple to set up.
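Under the hood, an Alexa Flash Briefing is just a JSON feed of recent items. As an illustration of the kind of feed a plugin like VoiceWP generates from your posts, here's a sketch; the field names follow Amazon's Flash Briefing feed format as I understand it (uid, updateDate, titleText, mainText, redirectionUrl), so treat them as an assumption and check the official docs, and the sample post data is made up:

```python
import json

def post_to_briefing_item(post):
    """Map one blog post to an Alexa Flash Briefing feed item (simplified)."""
    return {
        "uid": f"urn:uuid:post-{post['id']}",
        "updateDate": post["published"],  # ISO 8601 timestamp
        "titleText": post["title"],
        "mainText": post["excerpt"],      # the text Alexa reads aloud
        "redirectionUrl": post["url"],
    }

posts = [{
    "id": 42,
    "published": "2018-02-12T09:15:22.0Z",
    "title": "How to build an Alexa Skill in Wordpress",
    "excerpt": "Wordpress powers almost a third of the internet...",
    "url": "https://vux.world",
}]
feed = [post_to_briefing_item(p) for p in posts]
print(json.dumps(feed, indent=2))
```

Because the plugin regenerates the feed whenever you publish, your newest post becomes the newest briefing item automatically.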




Our Guest


VoiceWP was built by our guest, Tom Harrigan, Partner and VP of Strategic Technology at Alley Interactive, a full service digital agency that specialises in helping publishers succeed online. We speak to Tom about VoiceWP, which is allowing brands such as People.com and Dow Jones’ Moneyish.com build Alexa Skills and establish a presence on voice with ease.


And you can use it too, because it’s free and super-simple to set up.


So, if you use Wordpress as your CMS and you’re interested in testing the waters in voice, or if you’re looking for a starting point for Alexa Skill building, then this episode is for you.


We’re speaking to Tom about:


  • Where the idea for VoiceWP came from and how it was built
  • What is the plugin all about and what features does it have
  • Who’s using it right now and who is it targeted at
  • How can you get up and running with the plugin and try it out for yourself
  • What does the future look like and what’s coming up




Links

]]>
In this episode, we’re going to show you how you can build an Alexa Skill from right within Wordpress.

Wordpress powers almost a third of the internet and now millions of websites running Wordpress can all have a presence on voice. It’s all thanks to VoiceWP, the Wordpress plugin that lets you build an Alexa Skill from within the most widely adopted CMS on the planet.


You can create Flash Briefings with ease and even have Alexa read the content of your website. We all know about audio books, but this could be the first opportunity to have your website content turned into audio form and read aloud as soon as it’s published, without much effort at all. It’s super simple to set up.




Our Guest


VoiceWP was built by our guest, Tom Harrigan, Partner and VP of Strategic Technology at Alley Interactive, a full service digital agency that specialises in helping publishers succeed online. We speak to Tom about VoiceWP, which is allowing brands such as People.com and Dow Jones’ Moneyish.com build Alexa Skills and establish a presence on voice with ease.


And you can use it too, because it’s free and super-simple to set up.


So, if you use Wordpress as your CMS and you’re interested in testing the waters in voice, or if you’re looking for a starting point for Alexa Skill building, then this episode is for you.


We’re speaking to Tom about:


  • Where the idea for VoiceWP came from and how it was built
  • What is the plugin all about and what features does it have
  • Who’s using it right now and who is it targeted at
  • How can you get up and running with the plugin and try it out for yourself
  • What does the future look like and what’s coming up




Links

]]>
<![CDATA[Voice-first user testing with Sam Howard]]> Mon, 12 Feb 2018 08:54:15 GMT 53:06 5a8154beac34577e1adf2626 no full 2 In this episode, we're talking about voice first user testing, why it's so imperative and how you can get started doing your own voice user testing.


Why voice first user testing?


Although usability testing graphical user interfaces is as common as a trending tweet, it's a seed that’s yet to be greatly sown in the world of voice. There are many services that provide technical testing, but those specifically offering voice first user testing in person with real users are few and far between. Enter Userfy.


Whether you create Alexa Skills, Google Actions or any other voice user experience, this episode will help you make sure that your voice user interface (VUI) works for the people who use it, by teaching you how to approach a voice-based user testing project.


We’ll cover things like:


  • The current state of user research in the voice industry
  • Why is usability testing important?
  • What kind of users should you test with?
  • User testing processes and planning
  • How to approach a voice-first testing project
  • Validating assumptions
  • The difference between graphical and voice user testing
  • What tools and equipment you need


Introducing Sam Howard


Our guest is Sam Howard, co-founder and Director of user research agency, Userfy, which specialises in user testing. Sam has a PhD in Human-Computer Interaction and a degree in Psychology. That, mixed with a love of technology and a passion for helping people, puts Sam at the forefront of the user research field.


Links:

Sam Howard on Twitter

Userfy website

Userfy on Twitter

Sam's 'Usability challenges facing voice-first devices' article

]]>
In this episode, we're talking about voice first user testing, why it's so imperative and how you can get started doing your own voice user testing.


Why voice first user testing?


Although usability testing graphical user interfaces is as common as a trending tweet, it's a seed that’s yet to be greatly sown in the world of voice. There are many services that provide technical testing, but those specifically offering voice first user testing in person with real users are few and far between. Enter Userfy.


Whether you create Alexa Skills, Google Actions or any other voice user experience, this episode will help you make sure that your voice user interface (VUI) works for the people who use it, by teaching you how to approach a voice-based user testing project.


We’ll cover things like:


  • The current state of user research in the voice industry
  • Why is usability testing important?
  • What kind of users should you test with?
  • User testing processes and planning
  • How to approach a voice-first testing project
  • Validating assumptions
  • The difference between graphical and voice user testing
  • What tools and equipment you need


Introducing Sam Howard


Our guest is Sam Howard, co-founder and Director of user research agency, Userfy, which specialises in user testing. Sam has a PhD in Human-Computer Interaction and a degree in Psychology. That, mixed with a love of technology and a passion for helping people, puts Sam at the forefront of the user research field.


Links:

Sam Howard on Twitter

Userfy website

Userfy on Twitter

Sam's 'Usability challenges facing voice-first devices' article

]]>
<![CDATA[Welcome to VUX World with Kane Simms]]> Mon, 12 Feb 2018 07:51:08 GMT 16:07 5a81476dac34577e1adf2625 no trailer 1 Ladies and gentlemen, boys and girls, welcome to VUX World.


This introductory episode is all about what VUX World is all about. Here, I'll take you through:


  • the aims of the show
  • how we intend to meet those aims
  • why it exists
  • who would find it useful
  • what's in store over the coming months


The aims of VUX World


This is an ambitious show that intends to cover three core aims for three primary groups of people:


  1. To help VUX pros create better voice experiences by bringing together people from throughout the industry to share insights, tools, tips and tricks
  2. To help brands create voice first strategies and implement voice first solutions by learning from companies and agencies who're doing it right now
  3. To help grow the VUX industry by introducing people such as creatives, scientists, technologists, strategists, linguists, developers and anyone else to the voice first world


How we'll meet our aims


We'll reach those aims through focusing on three core pillars of content.


  • Why? We'll cover the 'why' aspect of the argument for voice. Why should you take this area seriously? Why develop your skills here? Why voice?
  • How? We'll extensively cover the 'how' side of things, too. How can you get started? How does the voice industry work? How can you develop here? We'll cover things like tutorials, guides, tips, hints and tactics to help you learn, develop and grow to create epic voice experiences.
  • What's stopping you? Every industry has its challenges. We want to delve into those challenges and uncover opportunities to push past the barriers and find opportunities to move forward.


The host of VUX World


Your host for this journey is me, Kane Simms. I have a history in sound design and music production as well as extensive experience in marketing, UX and agile project management. My love for all things audio and passion for understanding user behaviour and technology culminate perfectly right here in the world of voice.

So, strap in, hold tight and brace yourself for the rapidly expanding world of voice. I'm glad to be your guide.

Now, without further ado, you should totally check out the first proper episode of the podcast: User testing on voice-first devices with Sam Howard.


Enjoy :)

]]>
Ladies and gentlemen, boys and girls, welcome to VUX World.


This introductory episode is all about what VUX World is all about. Here, I'll take you through:


  • the aims of the show
  • how we intend to meet those aims
  • why it exists
  • who would find it useful
  • what's in store over the coming months


The aims of VUX World


This is an ambitious show that intends to cover three core aims for three primary groups of people:


  1. To help VUX pros create better voice experiences by bringing together people from throughout the industry to share insights, tools, tips and tricks
  2. To help brands create voice first strategies and implement voice first solutions by learning from companies and agencies who're doing it right now
  3. To help grow the VUX industry by introducing people such as creatives, scientists, technologists, strategists, linguists, developers and anyone else to the voice first world


How we'll meet our aims


We'll reach those aims through focusing on three core pillars of content.


  • Why? We'll cover the 'why' aspect of the argument for voice. Why should you take this area seriously? Why develop your skills here? Why voice?
  • How? We'll extensively cover the 'how' side of things, too. How can you get started? How does the voice industry work? How can you develop here? We'll cover things like tutorials, guides, tips, hints and tactics to help you learn, develop and grow to create epic voice experiences.
  • What's stopping you? Every industry has its challenges. We want to delve into those challenges and uncover opportunities to push past the barriers and find opportunities to move forward.


The host of VUX World


Your host for this journey is me, Kane Simms. I have a history in sound design and music production as well as extensive experience in marketing, UX and agile project management. My love for all things audio and passion for understanding user behaviour and technology culminate perfectly right here in the world of voice.

So, strap in, hold tight and brace yourself for the rapidly expanding world of voice. I'm glad to be your guide.

Now, without further ado, you should totally check out the first proper episode of the podcast: User testing on voice-first devices with Sam Howard.


Enjoy :)

]]>