VUX World: THE voice user experience and strategy podcast, hosted by Kane Simms
https://vux.world

VUI design best practice from user testing with 120 brands, with Abhishek Suthan and Dylan Zwick
Mon, 15 Oct 2018 04:06:00 GMT | 56:26 | https://vux.world/vui-design-best-practice

Pulse Labs founders, Abhishek Suthan and Dylan Zwick share their advice on VUI design best practice that they've learned from conducting voice first usability testing with over 120 brands.


Where to listen

Apple podcasts

Spotify

YouTube

CastBox

Spreaker

TuneIn

Breaker

Stitcher

PlayerFM

iHeartRadio


The search for VUI design best practice

In web design, there are standards. Common design patterns and best practice that you'll find on most websites and apps.

The burger menu, call to action buttons, a search bar at the top of the page. These have all been tried and tested and are par for the course on most websites.

In voice, that best practice is still to be worked out. And today's guests have begun to uncover it.

Pulse Labs is a voice first usability testing company. They conduct global remote user research by testing voice experiences for brands. Think of it almost like usertesting.com, but specifically for voice.

After working with over 120 brands, the founders, Abhishek Suthan and Dylan Zwick, have stumbled upon some of the most common mistakes that designers and developers make in their Google Assistant Actions and Alexa Skills.

Through design iterations and further testing, they've worked out what some of that best practice looks like.


In this episode

Over the course of this episode, we hear from Abhishek and Dylan about some of the most common mistakes designers make when it comes to voice user experience design.

We discuss how these issues can be fixed, as well as further best practice when designing for voice, including:

  • How to architect your voice app and design flat menus
  • How to handle errors and recover from failure
  • Framing experiences and handling expectations
  • When to apply confirmations and when to make assumptions
  • And a whole host more

This episode is one to listen to again and again. No doubt the standards will change as and when the tech advances and usage grows, but for now, this is probably the best start there is in defining best practice in voice.


Links

Visit the Pulse Labs website

Email Dylan Zwick

Follow Pulse Labs on Twitter

Follow Dylan on Twitter

Follow Pulse Labs on Facebook

Follow Pulse Labs on LinkedIn

 

Voice first social networks with Daniel Gonzalez
Mon, 08 Oct 2018 06:34:18 GMT | 1:08:47 | https://vux.world/voice-first-social-networks

This week, we take a deep dive into voice first social networks and messaging, as we explore whether these platforms are poised for success or doomed to fail. We also discuss some of the challenges in building a voice first product, including the limitations of the tech stack and how VUI design is a way of compensating for this.

To take us through the world of voice first social, we're joined by Daniel Gonzalez, co-founder of voice first messaging platform, SoundBite.




Where to listen

Apple podcasts

Spotify

YouTube

CastBox

Spreaker

TuneIn

Breaker

Stitcher

PlayerFM

iHeartRadio




In this episode, we discuss:

  • The current state of play in social voice, how most voice first social platforms are using an old social media model and how SoundBite differs.
  • Design challenges in designing social voice platforms, multi modal implications and perfecting a narrow use-case.
  • Details of the inherent technology challenges built into today's voice assistants and how to compensate for it with VUI design.
  • The future of the voice assistant technology landscape and how SoundBite are working towards it, including using Digital Signal Processing (DSP) and acoustic modelling instead of Automatic Speech Recognition (ASR) and Natural Language Processing (NLP).

Links

SoundBite website

Follow Daniel on Twitter

Email Daniel

Voice first digital transformation with Shawn Kanungo
Mon, 01 Oct 2018 03:53:00 GMT | 1:01:06 | https://vux.world/voice-first-digital-transformation
Digging into how to use voice as part of a digital transformation strategy.

Voice first technology has the potential to transform organisations. Join Dustin and me as we dig into how voice is being used to create efficiencies within businesses with Silver founder, Shawn Kanungo.

Silver, an agency based in Canada, is helping organisations use voice to streamline business processes, access line of business systems and improve productivity. We speak to the founder, ex-Deloitte digital transformation guru and speaker, Shawn Kanungo, to find out how it's done.

In this episode, we discuss:

  • How voice plus other exponential technologies will disrupt every industry and government agency
  • What voice looks like when it's combined with robotic process automation (RPA) and more
  • What does voice mean for a digital transformation strategy for an enterprise?
  • How Silver take a human-centered approach to voice by doing ethnographic research
  • Organisational culture and whether workers are ready for enterprise level voice
  • The future of voice and whether we'll see a billion dollar company built on a voice platform

Where to listen

Apple podcasts

Spotify

YouTube

CastBox

Spreaker

TuneIn

Breaker

Stitcher

PlayerFM

iHeartRadio


Links

https://silverdrip.com

https://www.shawnkanungo.com/

Silver on Instagram

Silver on Twitter

Silver on Facebook

The Rundown 002: Big news from Alexa as Google Home Mini becomes top selling smart speaker... and more
Tue, 25 Sep 2018 06:32:26 GMT | 41:10 | https://vux.world/alexas-new-hardware-google-home-top-smart-speaker

It's been a busy few weeks with both of the top two voice assistant platforms announcing new devices and software improvements, but what does it all mean for brands, designers and developers?


Google Home Mini becomes top selling smart speaker

That's right, the Google Home Mini smart speaker outsold all other smart speakers in Q2.

Google's intense advertising over the summer months looks like it could be starting to pay off, though it still isn't the market leader. Amazon holds that spot, for now.


Takeaway:

At the beginning of this year, Google Assistant was a nice-to-have feature in your voice strategy. Google's progress over the summer and the recent sales of the Google Home Mini now mean that a presence on Google Assistant is unavoidable for brands looking to make a serious play in this space.

We discuss whether you should use a tool like Jovo for developing cross-platform voice experiences, or whether you should build natively.


Dustin's pro tip:

If you need access to new feature updates as and when they're released, you should build natively. If you're happy to wait, use something like Jovo.

Google rumoured to be launching the Google Home Hub

It's rumoured that Google will be releasing a smart display to rival the Amazon Echo Show.

In the podcast, we said that this will go on sale in October. That's not the case. The actual sale date hasn't been announced yet.


Takeaway:

With more voice assistants bringing screens into the equation, designing and developing multi modal experiences is going to be an increasing area of opportunity over the next year.


Google becomes multi-lingual

Google announced multi-lingual support for Google Assistant. That means that you can speak to the Assistant in a different language and have it respond back to you in that language without having to change the language settings. This is a great feature for households that speak more than one language.


Takeaway:

Although this might not be widely used initially, this is a great step forward in providing a frictionless user experience for those who speak more than one language. For brands, this brings the necessity to internationalise your voice experiences closer to home.

Check out the podcast we did with Maaike Dufour to learn more about how to transcreate and internationalise your voice experience.


Amazon announces about a million Alexa devices

Amazon announced a whole host of Alexa enabled devices last week, including:

  • Echo Dot V2 and Echo Plus V2
  • A new Echo Show (with a 10 inch screen)
  • Echo Auto (for the car)
  • Echo Sub (a subwoofer)
  • Fire TV Recast (a TV set top box)
  • An Alexa-injected microwave
  • A clock, with Alexa built in
  • Echo Input (turns any speaker into a smart speaker)
  • A Ring security camera
  • A smart plug
  • An amp

Takeaway:

These new devices, whether they succeed or fail, present opportunities for brands, designers and developers in that they provide an insight into a user's context. That can help you shape an experience based around that context.

For example, you can now target commuters with long form audio through Alexa while they're driving. You can provide micro engagement through Alexa while your customer is cooking their rice.

This could be the beginnings of the 'Alexa Everywhere' movement, which will be laden with opportunities for those who seek to understand where users are and what they're seeking to achieve at that time.


Alexa Presentation Language

The Alexa Presentation Language allows you to design and develop custom visuals to enhance your user's screen-accompanying Alexa experience.

Until now, if you wanted to serve visuals on an Echo Spot or Echo Show, you'd have to use one of 7 design templates. This announcement means that you can create your own designs and even do things like sync visual transitions with audio and, in future, there'll be support for video and HTML 5.
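
To make that concrete, here's a rough sketch of what returning a custom APL document from a skill's handler can look like (TypeScript, ASK SDK). The layout, token and copy are invented for illustration, and the document schema should be checked against Amazon's current APL documentation.

```typescript
// Rough sketch: attaching a custom APL document to a skill response instead of
// one of the fixed display templates. Layout, token and text are invented.
import * as Alexa from 'ask-sdk-core';

const recipeDocument = {
  type: 'APL',
  version: '1.0',
  mainTemplate: {
    items: [
      {
        type: 'Container',
        items: [
          { type: 'Text', text: 'Step 2 of 6', fontSize: '30dp' },
          { type: 'Text', text: 'Simmer the rice for 12 minutes.', fontSize: '50dp' },
        ],
      },
    ],
  },
};

const ShowStepHandler: Alexa.RequestHandler = {
  canHandle: (input) =>
    Alexa.getRequestType(input.requestEnvelope) === 'IntentRequest' &&
    Alexa.getIntentName(input.requestEnvelope) === 'NextStepIntent',
  handle: (input) =>
    // In production you'd first check the device's supportedInterfaces for APL.
    input.responseBuilder
      .speak('Step two: simmer the rice for twelve minutes.')
      .addDirective({
        type: 'Alexa.Presentation.APL.RenderDocument',
        token: 'recipeStep', // arbitrary identifier for this rendering
        document: recipeDocument,
      })
      .getResponse(),
};
```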


Takeaway:

As with many of the items in this week's Rundown, there's an increasing emphasis on multi-modal experiences. Over the next year or so, expect more voice + screen devices. This will mean that you'll need to start thinking about how you can add value through visuals as part of your offering.


Kane's pro tip:

Even though there are more options for voice + screen, still focus on creating voice-first experiences. Don't let the screen take over. Lead with voice and supplement or enhance with visuals.

Alexa smart screen and TV device SDK

This announcement enables device manufacturers to create hardware with a screen that runs Alexa. For example, Amazon will announce the details of how Sony have used the SDK to add Alexa capability to their TVs.


Takeaway:

For hardware brands, you can now add Alexa to your products. For the rest of us, watch this space. This is yet further evidence to suggest that voice + screen experiences are going to be something users come to expect in future.


Introducing the Alexa Connect Kit (ACK)

ACK allows device manufacturers to add Alexa to their hardware without having to worry about creating a skill or managing cloud services or security.

Essentially, you can add an ACK module to your device, connect it to your micro controller and hey presto, you have an Alexa enabled device.

It's the same thing Amazon used to build their new microwave.


Takeaway:

Another opportunity for hardware brands to add value to your product line and another signal that Alexa will potentially be spreading further and wider. If you haven't thought about how this might impact your business and the opportunities you might find in future, this is a good time to start that thought process.


Two final Alexa announcements:

Whisper mode, which enables a user to whisper at Alexa and it'll whisper back.

Hunches, which is Alexa's first move to become proactive in suggesting things you might want to do based on previous behaviour.


Takeaway:

It's unclear whether either of these features requires developers to mark up their skills in any way, or whether Alexa will take care of everything for you.


Finally, Bixby

Bixby will be opening up for public Beta in November after a few months in private beta.

There was a webinar this week, exclusive to the private beta members, which included a host of announcements. I'm still trying to get hold of the webinar or someone who can shed some light on it and we'll try and bring you further news on this on the next Rundown.

All about Snips with Yann Lachelle
Mon, 24 Sep 2018 04:04:00 GMT | 1:00:35 | https://vux.world/snips
Find out how you can use the privacy-by-design voice assistant as an alternative to Alexa and Google Assistant.

This week, we're speaking to serial entrepreneur, Yann Lachelle, COO at Snips, about the privacy by design alternative to Google Assistant and Amazon Alexa.

Privacy is a hot topic. With the Cambridge Analytica scandal and the introduction of GDPR in Europe, people are becoming more aware and more concerned with how companies are using their data.

On the enterprise-side, one of the challenges preventing companies from implementing voice is the apprehension towards sending sensitive data to Amazon or Google.


Enter, Snips

The Paris-based startup is bringing a privacy-first approach to their voice assistant. We speak to Snips' COO Yann Lachelle about the details and how you can use it.

In this episode, we discuss:

  • What Snips is and its position in the market
  • Why privacy is a concern for consumers and companies
  • Snips' approach to voice and privacy
  • Edge computing and how Snips is tackling security
  • Open sourcing the backend of the Snips assistant
  • Blockchain and decentralising the voice ecosystem

Our guest

Yann Lachelle is a serial entrepreneur. He's founded and sold several companies and has a 100% record of founding and exiting. Yann's experience in the startup world is vast and his knowledge of AI and the voice industry is more than impressive.

As COO of Snips, Yann is helping Snips make technology disappear by bringing to market the world's first privacy-by-design voice assistant.

Yann brings us some inspiring stories, intensely relevant insights and plenty of observations that'll help you get a full understanding of what Snips can offer you or your clients.


Where to listen

Apple podcasts

Spotify

YouTube

CastBox

Spreaker

TuneIn

Breaker

Stitcher

PlayerFM

iHeartRadio


Links

Visit the Snips website

Try Snips for developers

Join the Snips community on Discord

Check out Snips' whitepaper explaining the details of their blockchain ambitions

Find out more about Snips and blockchain

 

All about voice testing with Bespoken's John Kelvie
Mon, 17 Sep 2018 05:57:19 GMT | 1:11:44 | https://vux.world/all-about-voice-testing
John talks us through three core testing types and how you can use them to improve your voice experience.

This week, Dustin and I catch up with John Kelvie, CEO and founder of Bespoken, and learn all about the three types of testing that can help you create and sustain great voice experiences.

We discuss:

  • Unit testing: how to test your code locally without having to deploy to the cloud and test through your smart speaker or phone. This can save developers a whole load of time and effort in the development phase (a bare-bones sketch follows this list).
  • End-to-end testing: how to automate testing of utterances and intents to make sure you're returning the correct response to the various utterances that can be fed through your skill or action. This saves the QA folks time, as you no longer need to fire up your skill or action and physically test every possible utterance.
  • Continuous testing: making sure that you continue to keep on top of the ever-changing AI operating systems and that your skill or action is always operating as intended.
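
Bespoken's own tooling is the subject of the episode; purely to make the unit testing idea above concrete, here's a bare-bones, framework-agnostic sketch that invokes a skill in-process with a hand-built request envelope. The intent name, wording and Jest-style test runner are assumptions, not anything Bespoken prescribes.

```typescript
// Bare-bones unit test sketch: invoke the skill in-process with a hand-built
// IntentRequest, so nothing needs deploying or speaking aloud. Assumes a
// Jest-style runner for test/expect; names and wording are illustrative.
import * as Alexa from 'ask-sdk-core';
import { RequestEnvelope } from 'ask-sdk-model';

// A trivial handler stands in for "your skill" so the sketch is self-contained.
const HelloIntentHandler: Alexa.RequestHandler = {
  canHandle: (input) =>
    Alexa.getRequestType(input.requestEnvelope) === 'IntentRequest' &&
    Alexa.getIntentName(input.requestEnvelope) === 'HelloIntent',
  handle: (input) => input.responseBuilder.speak('Hello from the skill.').getResponse(),
};

const skill = Alexa.SkillBuilders.custom()
  .addRequestHandlers(HelloIntentHandler)
  .create();

// A real envelope carries more fields; this is enough for the SDK to route the request.
const envelope = {
  version: '1.0',
  session: { new: true, sessionId: 's1', application: { applicationId: 'a1' }, attributes: {} },
  context: { System: { application: { applicationId: 'a1' }, device: { deviceId: 'd1', supportedInterfaces: {} }, apiEndpoint: 'https://api.amazonalexa.com' } },
  request: {
    type: 'IntentRequest',
    requestId: 'r1',
    timestamp: new Date().toISOString(),
    locale: 'en-GB',
    intent: { name: 'HelloIntent', confirmationStatus: 'NONE' },
  },
} as unknown as RequestEnvelope;

test('HelloIntent responds with speech', async () => {
  const response = await skill.invoke(envelope, {});
  expect(response.response.outputSpeech).toMatchObject({ type: 'SSML' });
});
```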

We also discuss the convergence of usability testing and technical testing and how they can play together, as well as hear John's take on the future of voice.




Where to listen

Apple podcasts

Spotify

YouTube

CastBox

Spreaker

TuneIn

Breaker

Stitcher

PlayerFM

iHeartRadio




Links

https://bespoken.io

Bespoken on twitter

Check out Bespoken's webinars

Netflix 13 Reasons Why does voice + video with Tony Lizza
Mon, 03 Sep 2018 09:40:29 GMT | 38:42 | https://vux.world/13-reasons-why
Working with Netflix to implement a voice + video experience on mobile.

Tony Lizza, Project Manager at Apollo Matrix, shares his battle scars from working on the technical implementation of the Netflix 13 Reasons Why interactive cinema experience.

This was a voice and video experience that was deployed through the mobile web browser and was used to promote Netflix's biggest show, 13 Reasons Why.

Dustin Coates and I talk to Tony all about the creation of the experience and the technical challenges Tony and his team faced in implementing something so bleeding-edge. That includes taking advantage of new APIs that let developers access a user's mic and camera from within a mobile web browser, and handling the lack of that functionality within the walled gardens of social media.

We discuss using a fallback touch-based experience and the surprising results of user testing, as well as the technical details of how to do speech-to-text from within a browser, and plenty more.
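
As a flavour of the browser APIs in play (not Apollo Matrix's actual code), here's a hedged sketch that asks for microphone permission and runs speech-to-text with the Web Speech API, dropping back to a touch-driven flow when those APIs aren't available, as is often the case inside social media in-app browsers.

```typescript
// Hedged sketch of browser-side speech capture. It uses the (Chrome-prefixed)
// Web Speech API for speech-to-text and falls back to a touch-driven flow when
// the browser, or an in-app browser, doesn't allow mic access or recognition.
type OnTranscript = (text: string) => void;

export async function startVoiceCapture(
  onTranscript: OnTranscript,
  onFallbackToTouch: () => void,
) {
  const SpeechRecognition =
    (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

  // Walled-garden in-app browsers often expose neither API, so bail out to touch.
  if (!SpeechRecognition || !navigator.mediaDevices?.getUserMedia) {
    onFallbackToTouch();
    return;
  }

  try {
    // Prompt for mic permission explicitly so the failure mode is clear to the user.
    await navigator.mediaDevices.getUserMedia({ audio: true });
  } catch {
    onFallbackToTouch();
    return;
  }

  const recognition = new SpeechRecognition();
  recognition.lang = 'en-US';
  recognition.interimResults = false;

  recognition.onresult = (event: any) => {
    const transcript = event.results[0][0].transcript;
    onTranscript(transcript);
  };
  recognition.onerror = () => onFallbackToTouch();

  recognition.start();
}
```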

Here's a promo video for the experience that gives you a flavour:



13 Reasons Why - Talk to the Reasons - Netflix, from Moth + Flame, on Vimeo.






Where to listen

Apple podcasts

Spotify

YouTube

CastBox

Spreaker

TuneIn

Breaker

Stitcher

PlayerFM

iHeartRadio






Links

www.talktothereasons.com

apollomatrix.com

The Rundown 001: Alexa settings API, 5 Google Assistant tips and more
Fri, 31 Aug 2018 04:56:00 GMT | 44:43 | https://vux.world/alexa-settings-api-google-assistant-tips
First in a new feature show looking at recent voice first announcements and how they impact designers, developers and brands.

We're starting a new feature on VUX World: The Rundown. Dustin Coates and I are getting together each week (or bi-weekly) to discuss the recent happenings in the voice space and how they'll impact designers, developers and brands.




Alexa settings API

We're starting off by discussing the Amazon Alexa feature that developers have been clamouring for since 2016: the settings API.

With the settings API, you can access the user's timezone (among other things) and use that within your skill to personalise the voice experience for your users. You can send them targeted push notifications at the appropriate time and use their preferred weather measurement (Celsius or Fahrenheit).
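
As a rough illustration of how a skill might consume this, here's a minimal sketch using the ASK SDK v2 for Node.js, written in TypeScript. The intent name, response wording and handler structure are invented for the example; only the settings lookups come from Amazon's SDK, and the exact method names are worth checking against the current SDK docs.

```typescript
// Minimal sketch (not from the episode): reading the device's time zone via the
// Alexa Settings API with the ASK SDK v2 for Node.js. Handler and intent names
// are illustrative; error handling is deliberately thin.
import * as Alexa from 'ask-sdk-core';

const LocalTimeIntentHandler: Alexa.RequestHandler = {
  canHandle: (input) =>
    Alexa.getRequestType(input.requestEnvelope) === 'IntentRequest' &&
    Alexa.getIntentName(input.requestEnvelope) === 'LocalTimeIntent',

  async handle(input) {
    const deviceId = Alexa.getDeviceId(input.requestEnvelope);

    // The UPS service client wraps the Settings API endpoints
    // (time zone, temperature unit, distance units).
    const upsClient = input.serviceClientFactory!.getUpsServiceClient();
    const timeZone = await upsClient.getSystemTimeZone(deviceId);

    const localTime = new Date().toLocaleTimeString('en-GB', { timeZone });
    return input.responseBuilder
      .speak(`In your time zone, ${timeZone}, it's currently ${localTime}.`)
      .getResponse();
  },
};

// The skill must be built with an API client, otherwise serviceClientFactory is undefined.
export const handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(LocalTimeIntentHandler)
  .withApiClient(new Alexa.DefaultApiClient())
  .lambda();
```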

We discuss Eric Olsen's (3PO Labs) in-depth review of the settings API and how it could be the beginning of something bigger.




Scott Huffman's 5 insights on voice tech

We also discuss Scott Huffman's (VP of Engineering, Google Assistant) post on five insights about voice technology and how they should impact your approach. For example, focusing on utilities and understanding what kind of things people use Assistant for at different times of day.




Voysis and Voicebot vCommerce study

We delve into the Voysis and Voicebot study on vCommerce and discuss how voice on mobile is so important, yet how it's bubbling away under the surface, not grabbing many headlines.




Alexa skills challenge, Storyline and icon creation

Finally, we discuss the latest Alexa Skills Challenge: Games, in-skill purchases on Storyline (check out the VUX World episode with Vasili Shynkarenka, CEO of Storyline) and the new Alexa feature that allows anyone to create icons for their skills.




Where to listen

Apple podcasts

Spotify

YouTube

CastBox

Spreaker

TuneIn

Breaker

Stitcher

PlayerFM

iHeartRadio




Other links

The Power of Habit book

Hooked book

All about Voiceitt with Sara Smolley
Mon, 27 Aug 2018 04:25:00 GMT | 52:33 | https://vux.world/voiceitt
Digging deeper into the Alexa Accelerator 2018 contender Voiceitt, the startup giving people their voice back.

Alexa Accelerator 2018 featured startup, Voiceitt, gives people with speech impairments their voice back. Today, we're joined by co-founder and VP Strategy, Sara Smolley, to hear all about it.

There are millions of people across the globe who have non-standard speech: people who've had a stroke, or who have multiple sclerosis or cerebral palsy, for example. Voiceitt's advanced speech recognition system, which is deployed through an app, allows those people to speak and be understood.

Once it's configured, all you do is speak through the app and Voiceitt will do the rest, handling speech to text and displaying the text on-screen whilst a synthetic voice speaks the words to you.

For all that's said about voice being accessible, Voiceitt's mission is to open up voice technology to the rest of the world.




Our Guest

After working in Hong Kong and South Korea in marketing and startup consulting, Sara moved to Tel Aviv to help build and establish Voiceitt. Sara travels across the globe working on the strategic side of the business, building relationships, gathering insights and bringing the powerful mission and technology that Voiceitt possesses to the world.




Where to listen

Apple podcasts

Spotify

YouTube

CastBox

Spreaker

TuneIn

Breaker

Stitcher

PlayerFM

iHeartRadio




Links

http://www.voiceitt.com

 

Voice first design strategy with Ben Sauer
Mon, 20 Aug 2018 04:19:00 GMT | 1:00:27 | https://vux.world/voice-product-strategy
From finding a use case to testing prototypes and everything in between and beyond.

Ben Sauer is a Design Strategist who's worked with some of the world's well-known brands: Virgin, Tesco, Pearsons, British Gas, Penguin Random House and the BBC. Ben worked with Clearleft as a Design Strategist for many years and more recently turned his attention to how voice will change design.

Over the last couple of years, Ben has been focusing on helping brands navigate the voice space and figure out how voice will impact their business, as well as where to start with a voice strategy.

Ben joins Dustin and me today to discuss the ins and outs of voice first design strategy, including finding a use case and the differences between voice design strategy and design strategy in general.




Where to listen

Links

Follow Ben Sauer on Twitter

Visit voiceprinciples.com

BenSauer.net

Adoption, growth, in-skill purchases and developer rewards with Nick Schwab
Wed, 08 Aug 2018 04:30:00 GMT | 38:20 | https://vux.world/in-skill-purchases-developer-rewards
Nick Schwab tells us how many people use his skills, about his in-skill purchasing success and much more.

The first in a new series called 'Unscripted', where we have off-the-cuff, unscripted conversations with voice first leaders and practitioners to get acquainted, hear their story and find out how they do what they do.


In this first episode, we speak to veteran Alexa skill developer Nick Schwab, founder of Invoked Apps, about:


  • User adoption of his Ambient Sound skills (his daily usage is huge!)
  • In-skill purchasing and his conversion rates (surprising!)
  • Developer rewards and how it all works
  • How much it costs to host a successful skill
  • Why now is the time for Europe to invest heavily
  • The discoverability crisis and what's changed


Follow Nick Schwab on Twitter

Check out Invoked Apps

All about conversational commerce with Charlie Cadbury
Mon, 06 Aug 2018 04:23:00 GMT | 50:34 | https://vux.world/conversational-commerce
Helping brands turn conversing strangers into paying customers.

In this episode, we take a deep dive into conversational commerce: what it is, what's possible and how you can turn conversing strangers into paying customers.


Our guest

Charles Cadbury is the co-founder of Say It Now, a company that helps brands respond to the growing consumer need for immediacy. Charlie's history is impressive: he's seen more than 1,000 client briefs and delivered over 300 digital projects, many of them related to commerce. Since working with Lola Tech to create the Dazzle platform, Charlie has remained focused on conversational interactions and on helping brands convert conversations into commerce.


Where to listen


Links

Check out the Say It Now website

Follow Charles on Twitter


#VOICE18 with Tim Kahle and Dominik Meißner of 169 Labs
Mon, 30 Jul 2018 04:00:26 GMT | 1:09:27 | https://vux.world/voice18
Reviewing the key takeaways from the Voice Summit 2018 and discussing the opportunities of the next 6 months.

We celebrate the 6 month anniversary of VUX World by reviewing the modev Voice Summit event that took place last week in Newark. We anchor on the Voice Summit to take stock of 2018 and look forward to what brands, designers and developers should be focusing on over the next 6 months.


To guide us through #VOICE18, Dustin and I are joined by Tim Kahle and Dominik Meißner, founders of 169 Labs.




Win 2 free tickets to the All About Voice conference in Munich on 12th October


169 Labs are running a voice first conference of their own on 12th October in Munich: All About Voice.


For a chance to win 2 free tickets to the event, just send a tweet using #AllAboutVoice and answer the question: why is 2018 all about voice?


169 Labs will pick a random winner who'll receive 2 free tickets to the conference.


Use the code VUXWORLD to save 10%.


Buy tickets




Links


Check out the Voice Summit website

Visit the All About Voice website

Visit the 169 Labs website

See the 169 Labs and Amazon Twitch broadcast

See Tim Kahle's slides from his talk

Read Dustin Coates' write up on day 1

The strategy, creativity and technology triangulation with RAIN's Will Hall and Jason Herndon
Mon, 23 Jul 2018 04:00:49 GMT | 1:18:05 | https://vux.world/strategy-creativity-technology

This week, we’re speaking to RAIN agency’s Will Hall and Jason Herndon about how their three pillars of strategy, creativity and technology are leading the world's biggest brands to voice first success.


In this episode: voice strategy, creative prowess and technological genius

In this episode, RAIN’s Executive Creative Director, Will Hall, and VP of Engineering, Jason Herndon, guide us through the practicalities of how they shape voice strategies and implement voice first solutions for the world's biggest brands.

Whether you're a brand, a designer or developer, this episode will help you understand how and where to start.

It’ll give you things to consider and help you align voice first initiatives with core business drivers.

It’ll show you what you can expect from working with (or at) a voice first agency and give you some examples of how industry-leading brands are approaching voice.

It’ll also present some of the challenges you’ll face and maybe even challenge your own thinking on whether your organisation is set-up for success, including showing you why 'systems thinking' is so important.

You'll understand how to hone-in on use cases that provide value.

You’ll learn how to structure a voice first project: the skills and resources you’ll need, who needs to be involved, and the process of going from nothing to implementing a world-leading voice experience.

It’ll show you tools that you can use for design and development, as well as guide you on the value of testing early.

It’ll also give you some ideas on how far ahead you should plan your roadmap and cover why a crawl, walk, run approach is most appropriate.

As ever, we go deep into all of the above and more - this episode is a longer one than usual, and it’s densely packed with nothing but insights.


Our guests

Will Hall is the Executive Creative Director at RAIN. Will has worked on countless projects for global brands and blends the strategy and creative sides of projects together, making sure that the strategic aims of clients are brought to fruition with the appropriate creative.

Jason Herndon, VP, Engineering at RAIN, has worked with the world's largest brands on technical architecture and development and, at RAIN, is responsible for turning big ideas into reality.


About RAIN

RAIN has worked with some of the world’s biggest brands on some of the most headline-grabbing Alexa Skills.

Campbells Kitchen and Tide were two of the first branded Alexa Skills and are still cited today as pioneering examples of how valuable voice can be for brands.

The Warner Brothers’ Dunkirk interactive story, which we discussed in our episode on voice games with Florian Hollandt, pushed the boundaries on what’s possible on the Alexa platform and brought movie-like sound design and scripting to the voice first world.

RAIN help brands big and small figure out the strategic value in bringing voice to their business, and guide them through the creation, implementation, promotion and development of voice first experiences.


Where to listen

Links


All about conversation design with PullString's Oren Jacob
Mon, 16 Jul 2018 04:30:00 GMT | 1:06:33 | https://vux.world/conversation-design
How to design conversational experiences for voice assistants and what to consider.


This week, we speak to conversation design master, Oren Jacob, about what it takes to create successful conversations with technology.

There are so many complexities in human conversation. When creating an Alexa Skill or Google Assistant Action, most designers try to mimic human conversation. Google itself has taken steps in this direction with the fabricated ‘mm hmm’ moments with Google Duplex.

But does all of this have an actual impact on the user experience? Does it make it better or worse? How natural is natural enough and does it matter?

What other factors contribute to conversation design that works?

PullString CEO and co-founder, Oren Jacob answers all in this week's episode.




In this episode on conversation design

We get deep into conversation design this week and discuss things like:

  • How natural should conversations with voice assistants be?
  • Why you shouldn't just try to mimic human conversation
  • The power of voice and what tools designers need to create compelling personas
  • Whether you should use the built-in text-to-speech (TTS) synthetic voice or record your own dialogue (there’s a short illustrative sketch after this list)
  • How and why writing dialogue is entirely different from writing to be read
  • The similarities and differences between making a film and creating a conversational experience on a voice first device
  • The limitations and opportunities for improved audio capability and sound design
  • The importance of having an equal balance of creative and technical talent in teams
  • What it all means for brands and why you should start figuring that out now
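
On the TTS-versus-recorded-dialogue question above, it's worth noting that most voice platforms let you mix the two in one response using SSML, so you can prototype with the synthetic voice and swap in studio recordings later. Below is a minimal, hedged sketch in Python (not from the episode): the audio URL and helper function are hypothetical, and the SSML follows the general pattern that Alexa and Google Assistant both support.

```python
# Illustrative only: build an SSML response that mixes synthetic TTS with a
# pre-recorded audio clip. The URL is a hypothetical placeholder.
RECORDED_INTRO_URL = "https://example.com/audio/brand-intro.mp3"  # hypothetical

def build_ssml(prompt_text: str, use_recorded_intro: bool = True) -> str:
    """Return SSML: an optional recorded intro, then TTS for the prompt."""
    intro = f'<audio src="{RECORDED_INTRO_URL}"/>' if use_recorded_intro else ""
    # A short break stops the synthetic voice crowding the recorded clip.
    return f"<speak>{intro}<break time='300ms'/>{prompt_text}</speak>"

if __name__ == "__main__":
    print(build_ssml("Welcome back. What would you like to do today?"))
```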

Our guest

Oren Jacob is the co-founder and CEO of PullString. Oren has worked in the space between creativity and technology for two decades.

After spending 20 years working at Pixar on some of the company's classic films such as Toy Story and Finding Nemo, Oren created ToyTalk.

ToyTalk was a company that allowed kids to interact with their toys through voice.

As voice technology progressed and voice assistants and smart speakers were shaping up to take the world by storm, ToyTalk morphed into PullString, the enterprise-grade conversation design platform.




About PullString

For over half a decade, PullString’s platform, software, and tools have been used to build some of the biggest and best computer conversations on the market, with use cases and verticals ranging from hospitality to home improvement, and from Hello Barbie to Destiny 2. It was also used to create the latest big-ticket skill, HBO’s Westworld: The Maze.




Where to listen

Links

Visit the PullString website

Follow PullString on Twitter

Read more about how the Westworld skill was created

Check out the details of the talk Oren will be giving at the VOICE Summit 18

Check out the details of Daniel Sinto's demo of PullString Converse happening at the VOICE Summit 18

Check out the VOICE Summit website

]]>


How to translate your Alexa Skill or Google Assistant Action with Maaike Dufour How to translate your Alexa Skill or Google Assistant Action with Maaike Dufour Mon, 09 Jul 2018 04:32:00 GMT 1:02:19 5b424ab4bf7bb7a274f0bb28 no https://vux.world/translate-alexa-skill Translating a voice experience takes more than translating the words in a script, it means transcreating the whole UX. This week, Maaike Dufour explains how. full 24 Translating your Alexa Skill or Google Assistant Action is about more than translating the words in your script. It's about translating the user experience. Maaike Dufour calls this 'transcreating' and she joins us this week to show us how it's done.




Why should you translate your Alexa Skill or Google Assistant Action?

The world is getting smaller. Technology has enabled us to reach and connect with people from every corner of the earth with ease.

Take this podcast for example. It’s listened to in over 40 different countries, most of which don’t speak English as a first language.

In fact, the vast majority of the world don’t speak English and certainly not as a first language.




Amazon Alexa is global

Amazon Alexa is localised for 11 countries at the time of writing. Five of them don’t speak English as a first language (France, Germany, Austria, Japan and India).

For global brands, having your Alexa Skill or Google Assistant Action available in every country you do business is a no-brainer. But even for hobbyists and smaller scale developers, think about the population of those countries and the potential impact you could have if your Skill were to do well in those locales.




In this episode

We’re being guided through the importance of making your Alexa Skill or Google Action available in other languages and what steps you should take to make that happen.

We discuss why simply translating your Alexa Skill script won’t work and why you need to recreate the user experience in your desired language.

We cover some of the cultural differences between countries and give some examples of why they make literal translations difficult. In the UK, for example, the X-Factor is a nationally recognised TV show, whereas in France it aired for one season and wasn’t well received. Referencing the X-Factor in a French Skill is therefore pointless.

Maaike tells us how, when transcreating your Alexa Skill, you might even need to change your entire persona because of how differently other cultures perceive the same character. In the UK, for example, a postman is simply someone who delivers mail: a distant stranger. In France, the postman is a close family friend who stops to chat and knows everybody in the street personally. That makes for two entirely different personas.

We discuss examples of words and phrases that exist in one language but don’t in another and how that can both open up opportunities and sometimes present challenges.
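
To make the 'transcreate, don't just translate' idea concrete, here's a minimal sketch of one way locale-specific content is often organised in a voice app: each locale gets its own persona notes, prompts and cultural references rather than a word-for-word translation. The structure, locales and strings below are illustrative assumptions, not code from the episode or from any particular SDK.

```python
# Illustrative only: locale-keyed content packs, where each locale changes the
# persona and cultural references, not just the words (the strings are made up).
CONTENT = {
    "en-GB": {
        "persona": "friendly but reserved",
        "welcome": "Hi, I'm your kitchen helper. What are we cooking today?",
        "example_show": "Bake Off",  # a culturally recognisable UK reference (illustrative)
    },
    "fr-FR": {
        "persona": "warm, chatty neighbour",
        "welcome": "Bonjour ! Qu'est-ce qu'on prépare de bon aujourd'hui ?",
        "example_show": "Top Chef",  # a reference that lands better in France (illustrative)
    },
}

def get_content(request_locale: str) -> dict:
    """Pick the content pack for the incoming request's locale, falling back to en-GB."""
    return CONTENT.get(request_locale, CONTENT["en-GB"])

if __name__ == "__main__":
    print(get_content("fr-FR")["welcome"])
```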




Our guest

We’re joined by Maaike Dufour, Freelance Conversation UX Designer, co-founder of UX My Bot and supreme transcreator of voice first applications. Maaike, quite rightly, prefers to use the term ‘transcreate’ instead of ‘translate’ because simply translating the words that make up your Alexa Skill or Google Assistant Action won’t work, as you’ll find out in this episode.

Maaike has worked on voice first UX for a number of years. Having worked with the Smartly.ai team, Maaike now works with Labworks.io and is helping the team break into international markets through the transcreation of popular Alexa Skills such as Would You Rather into other languages.




Where to listen

Links

Read Maaike's thoughts on Medium

Watch Maaike's talk at Chatbots and Voice Assistants London on YouTube

Follow Maaike on Twitter

Check out Maaike's website

Visit UX My Bot

]]>
<![CDATA[How I built the world's best chatbot with Steve Worswick]]> Mon, 02 Jul 2018 04:10:00 GMT 1:06:51 5b388886499f6bae41bb3be0 no https://vux.world/worlds-best-chatbot We speak to the creator of the world’s best chatbot about how to design Loebner Prize-winning conversational experiences. full 23 We speak to the creator of the world’s best chatbot about how to design Loebner Prize-winning conversational experiences.

Steve Worswick is the creator of Mitsuku, the general conversation chatbot that has won the Loebner Prize for the last two years straight.

13 years in the making, Mitsuku passed the Turing test and convinced a panel of judges that it was human over the course of a 20-minute conversation, two years in a row, to be crowned the world’s best chatbot and conversational agent.

It's featured in the Wall Street Journal, the BBC, The Guardian and Wired. And, unlike most chatbots that focus on serving a specific set of use cases, Mitsuku is a general conversational agent. That means you can speak to it about anything.

This week's Flash Briefing question is from Brielle Nickoloff of Witlingo: What would an open source voice assistant look like? Send us your thoughts and you could feature on the VUX World Flash Briefing this week!

What about voice?

Although Mitsuku is a text-based chatbot, this episode looks at how to take Steve’s 13 years of experience in creating conversational experiences and apply that to the voice first space.






In this episode

This episode is all about how to design and create a world-leading general conversational experience.

We get into detail about how Mitsuku is built (hint: it doesn’t use natural language processing or machine learning like most other conversational AI) and how Natural Language Processing-based conversational agents don’t quite hit the mark.

Steve tells us about Mitsuku’s rule-based supervised learning and how that’s leading to better experiences.
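
As a rough illustration of what rule-based matching looks like in general (a heavily simplified stand-in, not how Mitsuku itself is implemented), here's a tiny pattern-to-template responder. Real systems of this kind layer thousands of hand-written rules, wildcards and conversational context on top of this idea.

```python
import re

# Illustrative only: a toy rule-based responder. Each rule maps a regex pattern
# to a reply template; groups captured by the pattern can be echoed back.
RULES = [
    (re.compile(r"\bmy name is (\w+)", re.I), "Nice to meet you, {0}."),
    (re.compile(r"\bwhat can you do\b", re.I), "I can chat about more or less anything."),
    (re.compile(r"\bi feel (\w+)", re.I), "Why do you feel {0}?"),
]

def respond(utterance: str) -> str:
    """Return the first matching rule's reply, or a generic fallback."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

if __name__ == "__main__":
    print(respond("My name is Kane"))  # prints: Nice to meet you, Kane.
```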

Despite Mitsuku passing the Turing test, Steve tells us why the Turing test is redundant.

We discuss user behaviour and how people treat a general conversational agent, from counselling to romance, bullying to marriage and money worries, and how to be sensitive on those topics.

We hear how varied responses can increase engagement. So much so that one person has spent 9 hours talking to Mitsuku!

We find out how to deal with pronoun resolution and how to refer back to what was said earlier in the conversation.

We uncover how brands are using Mitsuku as part of their conversational experiences, handing off to her when a user strays away from the use cases that their bot can handle.

We chat about how Alexa fares against Mitsuku and hear where Siri would have finished if it were entered into the Loebner Prize competition.

Perhaps one of the most valuable lessons in this episode is the importance of persisting. Creating a conversational agent, a true conversational experience, takes time. It’s not a quick fix that you can cobble together like a basic Alexa Skill. It takes years of development, iteration and constant improvement. But, if you stick with it, you might end up with the next best conversational agent.






Our guest

Steve Worswick started out in IT support and built Mitsuku as a passion project on the side. Thirteen years of hard work and three Loebner Prizes later, he’s now working at the world’s largest chatbot agency and provider, Pandorabots.






Where to listen

Links

Contact Pandorabots

Check out Mitsuku on Pandorabots

Talk to Mitsuku

Check out Steve's talk at the Chatbots and Voice Assistants London event

]]>
<![CDATA[Helping brands bridge the gap with Witlingo's Brielle Nickoloff and Luciana Morais]]> Mon, 25 Jun 2018 04:11:00 GMT 59:20 5b300e46dda624020c882693 no https://vux.world/helping-brands-bridge-the-gap full 22 This week, we're finding out how brands can get started and enter the voice first world of smart speakers and digital assistants.

Dustin Coates and I are joined by one of the top US voice first agencies, Witlingo. We speak with two lead VUX designers, Luciana Morais and Brielle Nickoloff, about how your brand can bridge the gap over to voice.




In this episode

Brielle and Luciana share how they guide brands through the process of discovering their voice and establishing a voice first presence.

We discuss the new challenge of working out what your brand sounds like and how to determine whether to focus on voice first content or voice as a service.

They discuss how brands should be playing the long game and the challenge of convincing clients to start small and adopt a continuous improvement culture to grow their voice first capability.

We chat about figuring out whether you should repurpose existing content or create something new, and discuss some of the great guides to voice design that Witlingo produces, including the guide to making your Facebook content voice friendly.




Our guests

Luciana Morais has a background in UX research and analysis and has a wealth of design experience. She now works at Witlingo as UX Lead and VUI Designer.

Brielle Nickoloff has a background in linguistics and has published a study on The use of profane threats and insults in the Anthropomorphization of digital voice assistants. Brielle is also Lead Voice User Experience Research and Design at Witlingo.




Where to listen

Links

Visit the Witlingo website

Follow Witlingo on Twitter

Read Witlingo's VUI assessment guidelines

Read Witlingo's Facebook guidelines

Follow Brielle on Twitter

Follow Luciana on Twitter

Check out the Ubiquitous Voice Society

Read Brielle's paper: The use of profane threats and insults in the Anthropomorphization of digital voice assistants

It's about the interface, stupid

]]>
All about Speakeasy AI with the Fresh Prince of AI, Frank Schneider All about Speakeasy AI with the Fresh Prince of AI, Frank Schneider Mon, 18 Jun 2018 04:08:00 GMT 1:00:27 5b2610b35e5b4ddb4047fdfd yes https://vux.world/speakeasy-ai Delivering the promise of AI in voice and capturing intent without speech-to-text full 21 This week, Dustin and I are speaking with the Fresh Prince of AI, Frank Schneider, about how Speakeasy AI aims to deliver the promise of AI in voice (that’s a lot of AIs).

How many people truly understand what their customers are asking for? Whether it’s in your Alexa Skill, your chatbot or in your IVR, you can’t hope to serve the needs of your users or customers if you don’t understand what they’re trying to do or ask.


Understanding is the most important first step you can take

Once you truly understand the current situation, you can realise whether you’re meeting your existing customer needs, and how well you’re doing that.

Through gathering understanding, you can also work out where you’re failing and where the opportunities for improvement or expansion are.

That then helps you improve and plan for the future.

Speakeasy AI is helping businesses understand what their customers are trying to accomplish on a wide variety of conversational platforms by extracting the intent from any conversation.

Its patent-pending technology, called Speech-to-Intent, doesn’t use the typical speech-to-text engine that most voice-first platforms use. Instead, it analyses the actual audio in real time, funnelling it through a pipeline of ‘top secret’ microservices.

This means that low audio quality and accents have no effect on its ability to understand customer intent. It also allows for a deeper understanding of context.
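
The Speech-to-Intent pipeline itself is proprietary, so purely as a generic illustration of the idea, classifying intent from audio-derived features rather than from a transcript, here's a hedged sketch. The feature extractor and the scoring step are toy placeholders and don't reflect how Speakeasy AI actually works.

```python
import numpy as np

# Illustrative only: a generic "audio features -> intent" pipeline, standing in
# for the idea of skipping the transcript. The feature extraction and scoring
# below are toy placeholders, not a real acoustic model.
INTENTS = ["check_balance", "report_fraud", "speak_to_agent"]

def extract_features(audio_samples: np.ndarray) -> np.ndarray:
    """Stand-in for an acoustic feature extractor (e.g. filterbank energies)."""
    # Summarise the waveform crudely; a real system would use learned features.
    return np.array([audio_samples.mean(), audio_samples.std(), np.abs(audio_samples).max()])

def classify_intent(features: np.ndarray) -> str:
    """Stand-in for a trained classifier that maps audio features to an intent."""
    # Toy scoring so the demo runs end to end: derive an index from the features.
    return INTENTS[int(abs(features.sum()) * 1000) % len(INTENTS)]

if __name__ == "__main__":
    fake_audio = np.random.default_rng(0).normal(size=16000)  # one second at 16 kHz
    print(classify_intent(extract_features(fake_audio)))
```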


In this episode

Dustin Coates and I hear from Frank Schneider, CEO, Speakeasy AI, about the current state of play in the AI field and touch on the amount of bullshit that exists right now.

We discuss how conversational understanding works and why speech-to-text might not be the most optimum way to capture intent.

We delve into the ins and outs of Speakeasy AI and get the low-down on its patent-pending Speech-to-Intent technology and hear how it could be a better way of understanding customer intents, regardless of audio quality or accents.

Frank tells us all about how Speakeasy AI can help businesses improve any conversational platform. He shares the opportunities that exist in the IVR space and how much untapped potential there is for businesses that are willing to listen.

We've discussed VUI design for IVR with Simonie Wilson recently, and it would seem that you could use Speakeasy AI as part of a discovery piece of work to figure out where to start, then use Simonie's techniques to begin making improvements.

We also chat about the challenges of the AI industry and how working together could bring progress.


Our guest

Frank was born and raised in Philly and, after spending 9 years in education, including teaching at a school for high school kids who committed felonies, he transitioned into technology sales and marketing, where he’s spent the last 13 years.

He’s consulted for and led teams providing various SaaS and AI solutions for contact centers and B2B. He was the first sales executive at Creative Virtual USA and helped grow the team from 12 to 40 employees. After a successful exit, his former CEO is now funding his new venture, Speakeasy AI.


Where to listen

Links

Visit the Speakeasy AI website

Follow Speakeasy AI on Twitter

]]>
All about Alpha Voice with Bryan Colligan All about Alpha Voice with Bryan Colligan Mon, 11 Jun 2018 04:00:00 GMT 59:29 5b1b7a577c78f2c616b56287 no https://vux.world/alpha-voice/ Making your podcast and YouTube content findable on voice full 20 This week, we’re finding out how content creators can have their podcasts and YouTube content indexed and searchable on voice, with Bryan Colligan of Alpha Voice.

With the podcast industry thriving and more people listening to podcasts than ever, more brands are starting to launch their own podcasts. Podcasts are a perfect fit for devices like the Echo and Google Home because they provide ambient entertainment, similar to the widely popular relaxation sounds skills.

Two problems face podcast and content creators: how do you make your podcast discoverable in the first place and how do you allow people to search through your backlog of episodes in order to find something that interests them?

Podcast discoverability is almost as much of a problem as Alexa Skill discoverability. Although Google is beginning to do its bit to help podcasts be discovered online, what about on voice?

This is the problem Alpha Voice aims to solve.

Help others get their skill passed first time by sharing your skill certification stories: Send us your tips and you could feature on the VUX World Flash Briefing this week!

What is Alpha Voice?

Alpha Voice indexes your podcast or YouTube content and makes it all searchable on Alexa via your own Alexa Skill.

And it’s not just the podcast titles and guests you can search for. You can search for anything at all that interests you and the platform will search within your content to find your search term, then recommend that episode for you to listen to.
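
Under the hood, 'search anything within your content' generally means indexing transcripts and matching the query against them. As a simplified, hedged illustration (not Alpha Voice's actual implementation), here's a tiny inverted index over some made-up episode transcripts.

```python
from collections import defaultdict

# Illustrative only: a toy inverted index mapping words to the episodes whose
# transcripts contain them. The transcripts here are made-up one-liners.
TRANSCRIPTS = {
    "Episode 12": "today we talk about voice analytics and retention metrics",
    "Episode 15": "designing personas and sound design for voice games",
    "Episode 19": "retention, funnels and measuring voice app performance",
}

def build_index(transcripts: dict) -> dict:
    index = defaultdict(set)
    for episode, text in transcripts.items():
        for word in text.lower().split():
            index[word.strip(",.")].add(episode)
    return index

def search(index: dict, query: str) -> set:
    """Return episodes whose transcripts contain every word in the query."""
    words = [w.strip(",.").lower() for w in query.split()]
    results = [index.get(w, set()) for w in words]
    return set.intersection(*results) if results else set()

if __name__ == "__main__":
    idx = build_index(TRANSCRIPTS)
    print(search(idx, "retention"))  # returns Episodes 12 and 19
```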


In this episode

We’re talking to Alpha Voice co-founder, Bryan Colligan, about how the platform works, how he and his co-founder built it and what value it gives content creators.

We also get into detail about how the VUX of search works on voice: processing and serving potentially hundreds of search results. How do you determine which ones to display to the user?

We also discuss:

  • The 5 ways to monetise content
  • Skill certification inconsistencies, including censorship and 'unwritten rules’
  • How you can get up and running with Alpha Voice

We wrap up by telling you all about the VUX World Alexa Skill, built using Alpha Voice! (U.S. only right now but will be available in EU soon.)


Our guest

Bryan Colligan is an entrepreneur and the co-founder of Alpha Voice. Bryan is based in Silicon Valley, has founded a series of startups and has been helping startups create mobile apps and improve their SEO for the last 10 years.

After reading the Mary Meeker internet trends report and learning that Google can understand 96% of what humans say, Bryan has turned his attention to the voice-first world.

After a number of failed experiments, he stumbled across the idea for Alpha Voice and is now helping content creators have their content found on Alexa.


Where to listen

Links


]]>
Voice analytics and Dashbot with Arte Merritt Voice analytics and Dashbot with Arte Merritt Mon, 04 Jun 2018 03:57:00 GMT 49:08 5b12406f59525a0e071bfa5c no https://vux.world/voice-analytics/ How to understand and improve your voice first user experience full 19 This week, we’re getting deep into voice analytics and will help you learn more about how you can understand the performance of your voice first experience.

One of the biggest benefits that technology has given us is the ability to understand. To understand whether our latest PPC campaign had an impact on sales. To understand whether our new website increased our leads. To understand whether our pricing tweak made a difference on click through rates. To understand whether our foray into Facebook is sending more traffic. To understand whether our customers are satisfied.

Tools such as Google Analytics have been providing this kind of value to website owners for years. Tracking where your users come from (Google, Facebook etc), what they do when they arrive and whether they convert are the cornerstones of understanding website performance.




What about voice analytics?

With the introduction of new mediums such as conversational chatbots and voice first applications on platforms such as Alexa and Google Assistant, how do you understand the performance of these things?

How do you know if your Alexa Skill or Google Action is successful? Send us your answers and you could feature on the VUX World Flash Briefing this week!

Can you apply the same rules as the web? Can you even access the same data? Are there new metrics that matter more? And how can you use all of this to understand and improve the performance and use of your product?

Well, that’s what you’re about to find out.




In this episode

We’re speaking to Dashbot.io CEO Arte Merritt all about the conversational analytics platform and how you can understand whether your conversational experience is working for your users.

We discuss the kinds of metrics Dashbot provides (there’s a short illustrative sketch after this list), including:

  • No. users
  • Repeat users
  • Time per session
  • Retention
  • Sentiment analysis
  • Message funnels
  • Intent funnels
  • Top exit messages
  • AI performance
  • Goals
  • Behaviour flow
  • Conversation flow
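
As promised above, here's a rough sketch of how a couple of these metrics fall out of raw usage events. It's a generic illustration, not Dashbot's implementation or API, and the events are made up.

```python
from collections import defaultdict
from datetime import datetime

# Illustrative only: toy usage events in the form (user id, session id, timestamp).
EVENTS = [
    ("user_a", "s1", "2018-06-01T09:00:00"),
    ("user_a", "s1", "2018-06-01T09:01:30"),
    ("user_a", "s2", "2018-06-02T18:00:00"),
    ("user_b", "s3", "2018-06-01T20:00:00"),
]

def repeat_users(events) -> int:
    """Count users with more than one distinct session."""
    sessions = defaultdict(set)
    for user, session, _ in events:
        sessions[user].add(session)
    return sum(1 for s in sessions.values() if len(s) > 1)

def avg_session_seconds(events) -> float:
    """Average session length, measured first-to-last event per session."""
    times = defaultdict(list)
    for _, session, ts in events:
        times[session].append(datetime.fromisoformat(ts))
    durations = [(max(t) - min(t)).total_seconds() for t in times.values()]
    return sum(durations) / len(durations)

if __name__ == "__main__":
    print(repeat_users(EVENTS), avg_session_seconds(EVENTS))
```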

Arte shares some case studies of how the tool has been used to understand and then improve conversational experiences.

We discuss some of the challenges with conversational analytics and how they relate to the voice first space and we hear about where voice analytics are heading in the future.




Our guest

Arte Merritt has worked in mobile and analytics for 20 years. He built an analytics platform which he sold to Nokia, before turning his attention to a gap in the market when he realised that Slack didn’t have any analytics. Dashbot was born and it’s been serving conversational designers ever since, helping them understand and improve their chatbots and voice applications. Since its creation, Dashbot has analysed 32 billion messages and counting!




Where to listen

Links

]]>
All about Pindrop, VUI design and VUI tuning with Simonie Wilson All about Pindrop, VUI design and VUI tuning with Simonie Wilson Mon, 28 May 2018 04:06:00 GMT 57:35 5b064d0c53e05063084e4e55 no https://vux.world/pindrop-vui-design-and-vui-tuning Getting into detail about voice first security and the practicalities on VUI tuning full 18 This week, we take a look at the similarities between VUI design for IVR and VUI design for voice assistants. We also explain what VUI tuning is and why it’s important, whilst giving you some tips on how you can tune your voice user interface. We also discuss PinDrop and voice first security.


In this episode

We speak to one of the world’s expert VUI practitioners, Simonie Wilson, to get under the hood of Passport and figure out what it is, how it works, why it’s needed and how you can use it to authenticate users with confidence whilst preventing fraud.

We also tap into Simonie’s vast VUI design experience and discuss how she goes about designing VUIs that delight rather than smite customers. We get into detail about the benefits of VUI tuning and Simonie shares her advice on how you can continuously improve a VUI experience.

Are brands failing on Amazon Alexa and Google Assistant? Send us your answers and you could feature on the VUX World Flash Briefing this week!

Privacy and security

Privacy is often cited as a barrier and a challenge in the voice first space. How do you authenticate a user, build trust and enable people to transact in a frictionless way, all without a long, drawn-out, failure-stricken onboarding process?

PinDrop is changing that with its product, Passport: a foolproof way to recognise whether someone is who they say they are simply by the sound of their voice. It works in all voice first areas and can even tell whether the voice is synthetic.

Here's an example of it working with Alexa:



There is so much potential in the voice first space from a vcommerce, health and financial management perspective that technology such as this could smooth over the cracks in the verification process and enable people to transact more seamlessly in a voice first world.
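
Passport's technology is proprietary, but the general shape of voice verification is well known: derive a 'voiceprint' embedding from audio and compare it with an enrolled one. The sketch below is a generic, hedged illustration of that comparison step only; the embeddings are random stand-ins and nothing here reflects PinDrop's actual approach.

```python
import numpy as np

# Illustrative only: compare a claimed speaker's enrolled voiceprint with a new
# utterance's embedding using cosine similarity. Real systems derive these
# embeddings from audio with trained models; here they are random placeholders.
rng = np.random.default_rng(42)
enrolled_voiceprint = rng.normal(size=256)
new_utterance_embedding = enrolled_voiceprint + rng.normal(scale=0.1, size=256)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

ACCEPT_THRESHOLD = 0.8  # illustrative threshold; tuned per deployment in practice

score = cosine_similarity(enrolled_voiceprint, new_utterance_embedding)
print(f"similarity={score:.3f}", "accepted" if score >= ACCEPT_THRESHOLD else "rejected")
```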


Our Guest

Simonie Wilson is the queen of VUI design. With over 20 years’ experience in the speech and VUI design space, Simonie’s career has spanned large companies such as Microsoft and GM, as well as startups and contract work. Simonie has knowledge and experience in the VUI design space that few others do and is one of the few people to have extensive experience with VUI tuning.

Simonie is madly passionate about VUI design and, in this episode, shares all of that passion and some real lessons and insights from her experience that’ll help all VUI designers improve what they do.


Where to listen

Links

Visit the PinDrop website

Check out PinDrop on YouTube

PinDrop on Facebook

PinDrop on Twitter

Connect with Simonie on LinkedIn

Email Simonie

Read about PinDrop Passport in Forbes

]]>
All about BotTalk and how to run a voice first discovery workshop with Andrey Esaulov All about BotTalk and how to run a voice first discovery workshop with Andrey Esaulov Mon, 21 May 2018 04:00:00 GMT 1:04:46 5afbdbb12faeed8e04002ae5 no https://vux.world/bottalk-voice-first-discovery-workshop Showing you how to host an effective discovery workshop and build skills using BotTalk full 17 This week, we’re digging into how you can create an Alexa Skill using BotTalk and we give you a template for running a voice first discovery workshop, with SmartHaus Technologies CEO and BotTalk co-founder, Andrey Esaulov.

We discuss the importance of starting with a solid use case and how imperative it is to base your voice app on a real-world scenario that’ll add value to your users.

What turns an average voice experience into an EPIC voice experience? Send us your answers and you could feature on the VUX World Flash Briefing this week!

We then dive deep into the practical detail of how to approach designing a voice first user experience with BotTalk and find out more about the language it’s built in: YAML. We discuss what BotTalk is, how it’s different from some of the other tools on the market, how it works, its features and how you can get up and running.

Finally, Andrey takes us through a voice first discovery workshop template that he uses with clients in order to take a brand from zero to hero: from ideation to prototype, and how you can do the same too.

We also traverse some other interesting conversational landscapes, such as the concept of skill-first companies: brands that launch as skills, where the skill is the core of the business in the same way the app is for Instagram. We chat about Artificial Intelligence and how intelligent it actually is in the voice first space. We touch on managing client expectations, monetisation and how voice is making waves in Germany.


About BotTalk

The current selection of skill building tools on the market sits at opposite ends of the technical spectrum. Some, like Jovo, require you to code from the ground up and be a skilled back-end developer. Others, like Storyline, have a drag-and-drop interface and don’t require any coding at all.

BotTalk bridges the gap between those two worlds with a tool that’s aimed at UX designers who have some basic coding knowledge, like HTML and CSS. It provides some of the technical capability you’d expect if you built something from scratch, whilst using a simpler language: YAML. Think of it as HTML for voice.
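
To give a flavour of what a YAML-defined dialogue can look like, here's a hedged sketch that loads a hypothetical scenario and reads one step of it. The schema (scenario, steps, say, next) is invented for illustration and is not BotTalk's actual format; the snippet assumes the PyYAML package is installed.

```python
import yaml  # PyYAML

# Illustrative only: a made-up YAML schema for a simple dialogue scenario.
# This is NOT BotTalk's real format, just a sketch of the general idea.
SCENARIO_YAML = """
scenario: coffee_order
steps:
  - name: welcome
    say: "Hi! Do you want your usual flat white?"
    next: confirm
  - name: confirm
    say: "Great, your order is on its way."
"""

def run_step(scenario: dict, step_name: str) -> str:
    """Find a step by name and return what the assistant should say."""
    step = next(s for s in scenario["steps"] if s["name"] == step_name)
    return step["say"]

if __name__ == "__main__":
    scenario = yaml.safe_load(SCENARIO_YAML)
    print(run_step(scenario, "welcome"))
```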


Our Guest

Andrey Esaulov is the CEO of SmartHaus Technologies, which specialise in growth hacking in the mobile space, and the co-founder of BotTalk, a voice first and bot application building platform.

Andrey has a computer science background, with extensive experience in the startup world and mobile growth space, as well as a PhD in Linguistics and Literacy.

Andrey’s skillset is a perfect match for this industry and his knowledge in this area is vast. Couple his computer science and linguistics knowledge with his skills in working with clients and delivering growth and you’ve got a perfect recipe for success.


Links

Check out BotTalk

Follow Andrey on Twitter

Join the BotTalk Facebook community

Follow BotTalk on Insta

Watch the BotTalk tutorials on YouTube

Visit the Smarthaus Technologies website

Join the Alexa Slack channel

Enable the VUX World Flash Briefing

Feature on this week's Flash Briefing

Where to listen


]]>
All about voice search with the SEO Oracle, Dr. Pete All about voice search with the SEO Oracle, Dr. Pete Mon, 14 May 2018 04:00:50 GMT 1:02:13 5af53fc74af21e970211c991 no https://vux.world/voice-search With over 1 billion voice searches happening each month, how can you be the singularity: the top spot in voice search? full 16 Dr. Pete, Marketing Scientist at Moz, and world-leading SEO oracle, tells all about the voice search landscape, and how you can rank for searches on digital assistants like Google Assistant and Amazon Alexa.


This is a jam-packed episode with deep, deep insights, advice and guidance on all things voice search related. We'll give you practical ways to compete to be the answer that’s read out in voice first searches, as well as some thoughts on the benefits that could bring, both now and in the future.


Voice search

There are all kinds of stats around voice search, which we’ve touched upon before.


With more people using their voice to search, how will that affect search marketers, content creators and brands?


What’s the difference between a voice search and a typed search?


Is there anything you can do to appear in voice search results?


We speak to one of the search industry's top sources of SEO knowledge, Dr. Pete, to find out.


Getting deep into voice search

In this episode, we’re discussing the differences between voice search on mobile, voice first search on smart speakers and typed search.


We discuss the absence of search engine results pages (SERPs) in a voice first environment and increased competition for the singularity: the top spot in voice search.


We chat about the search landscape, the effect voice is having on search, changing user behaviour and expectations, new search use cases and multi-modal implications, challenges and opportunities.


We get into detail about how voice search works on devices such as Google Assistant and Google Home. This includes debating Google’s knowledge graph and its advantages and disadvantages in a voice first context.


We look at the practicalities of serving search results via voice. This touches on the different types of search results, such as featured snippets, and how voice handles different data formats such as tables. We get into detail about the different types of featured snippets available and how well each translates (or doesn’t) to voice.


We discuss Dr. Pete’s work and studies in the voice first space including his piece ‘What I learned from 1,000 voice searches' and what he found.


We wrap up with some practical tips that you can use right now to start preparing for the influx of voice searches that’ll be hitting the airwaves soon and help you start to rank in a voice first environment.
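
One practical step in that direction, offered here as a generic illustration rather than anything prescribed in the episode, is marking up concise question-and-answer content with structured data, since clear, well-structured answers are what featured snippets (and therefore voice answers) tend to draw from. The snippet below builds schema.org FAQPage JSON-LD in Python; the question and answer text are placeholders.

```python
import json

# Illustrative only: schema.org FAQPage markup, built as a Python dict and
# serialised to JSON-LD for embedding in a page's <head>. Content is placeholder.
faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "What is voice search?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Voice search lets people ask questions aloud and hear a single spoken answer.",
        },
    }],
}

print('<script type="application/ld+json">')
print(json.dumps(faq_markup, indent=2))
print("</script>")
```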


Our Guest


Dr. Pete Myers (a.k.a. Dr. Pete, a.k.a. the Oracle) is the Marketing Scientist at Moz, the SEO giant and search industry leader.


Dr. Pete has been an influential search marketer since 2012 and has spent years studying Google’s search algorithm, advising clients and the SEO industry on best practice and guiding the industry into the future.

His research and writing on the topic have helped brands keep on top of the search space and improve their rankings and business performance, and have helped keep Moz at the top of the industry.


Moz has been at the top of the SEO chain since 2004 and is trusted by the whole SEO industry as the place to go for SEO tooling, insights and practical guidance.


Links


Where to listen

]]>
Dr. Pete, Marketing Scientist at Moz, and world-leading SEO oracle, tells all about the voice search landscape, and how you can rank for searches on digital assistants like Google Assistant and Amazon Alexa.


This is a jam-packed episode with deep, deep insights, advice and guidance on all things voice search related. We'll give you practical ways to compete to be the answer that’s read out in voice first searches, as well as some notions on the current and potential future benefits that this could bring.


Voice search

There are all kinds of stats around voice search, which we’ve touched upon before.


With more people using their voice to search, how will that affect search marketers, content creators and brands?


What’s the difference between a voice search and a typed search?


Is there anything you can do to appear in voice search results?


We speak to one of the search industry's top sources of SEO knowledge, Dr. Pete, to find out.


Getting deep into voice search

In this episode, we’re discussing the differences between voice search on mobile, voice first search on smart speakers and typed search.


We discuss the absence of search engine results pages (SERPs) in a voice first environment and increased competition for the singularity: the top spot in voice search.


We chat about the search landscape, the effect voice is having on search, changing user behaviour and expectations, new search use cases and multi modal implications, challenges and opportunities.


We get into detail about how voice search works on devices such as Google Assistant and Google Home. This includes debating Google’s Knowledge Graph and its advantages and disadvantages in a voice first context.


We look at the practicalities of serving search results via voice. This touches on the different types of search results, such as featured snippets, and how voice handles different data formats such as tables. We get into detail about the different types of featured snippets available and how each translates (or doesn’t) to voice.


We discuss Dr. Pete’s work and studies in the voice first space including his piece ‘What I learned from 1,000 voice searches' and what he found.


We wrap up with some practical tips that you can use right now to start preparing for the influx of voice searches that’ll be hitting the airwaves soon and to help you start to rank in a voice first environment.


Our Guest


Dr. Pete Myers (a.k.a. Dr. Pete, a.k.a. the Oracle) is the Marketing Scientist at Moz, the SEO giant and search industry leader.


Dr. Pete has been an influential search marketer since 2012 and has spent years studying Google’s search algorithm, advising clients and the SEO industry on best practice and guiding the industry into the future.

His research and writing on the topic have helped brands keep on top of the search space and improve their rankings and business performance, and have helped keep Moz at the top of the industry.


Moz has been at the top of the SEO chain since 2004 and is trusted by the whole SEO industry as the place to go for SEO tooling, insights and practical guidance.


Links


Where to listen

]]>
All about Voysis and the GUI to VUI transition with Brian Colcord All about Voysis and the GUI to VUI transition with Brian Colcord Mon, 07 May 2018 03:31:00 GMT 52:25 5ae81879840b09bc329898a8 no https://vux.world/all-about-voysis Taking a close look at the Voysis platform and discussing transitioning from GUI to VUI design with VP of Design, Brian Colcord. full 15 We’ve covered plenty of voice first design and development on this podcast. Well, that’s what the podcast is, so we’re bound to! Most of what we’ve discussed has largely been voice assistant or smart speaker-focused. We haven’t covered a huge amount of voice first application in the browser and on mobile, until now.


Mic check

You’ll have noticed the little mic symbol popping up on a number of websites lately. It’s in the Google search bar, it’s on websites such as EchoSim, and Spotify is trialling it too. When you press that mic symbol, it enables your mic on whatever device you’re using and lets you speak your search term.


Next time you see that mic, you could be looking at the entry point to Voysis.

On a lot of websites, that search may well just use the website’s standard search tool to perform the search. With Voysis, its engine will perform the search for you using its voice tech stack.


That means that you can perform more elaborate searches that most search engines would struggle with. For example:

“Show me Nike Air Max trainers, size 8, in black, under $150”


Most search engines would freak out at this, but not Voysis. That’s what it does.
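To make that concrete, here’s a purely illustrative sketch, in Python, of how an utterance like the one above might be reduced to structured filters and applied to a product catalogue. It is not Voysis’s actual API or pipeline; the parsed slots and the catalogue are invented.

```python
# Illustrative only: a hand-parsed version of the utterance above, not real
# NLU output. In practice the voice tech stack derives these slots from the
# spoken query automatically.
parsed_query = {
    "product": "nike air max",
    "size": 8,
    "colour": "black",
    "max_price": 150,
}

# A tiny, made-up catalogue to filter against.
catalogue = [
    {"name": "Nike Air Max 90", "size": 8, "colour": "black", "price": 120},
    {"name": "Nike Air Max 270", "size": 8, "colour": "white", "price": 140},
    {"name": "Nike Air Max 97", "size": 8, "colour": "black", "price": 165},
]

def matches(item, query):
    """Return True if a catalogue item satisfies every filter in the query."""
    return (
        query["product"].split()[0] in item["name"].lower()
        and item["size"] == query["size"]
        and item["colour"] == query["colour"]
        and item["price"] <= query["max_price"]
    )

results = [item for item in catalogue if matches(item, parsed_query)]
print(results)  # -> only the black, size 8 Air Max 90 under $150
```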

Of course, it’s more than an ecommerce search tool, as we’ll find out during this episode.


In this episode

We discuss how approaches to new technology seem to wrongly follow a reincarnation route. Turning print into web by using the same principles that govern print. Turning online into mobile by using the same principles that govern the web. Then taking the practices and principles of GUI and transferring them to VUI. We touch on why moving your app to voice is the wrong approach.


We also discuss:

  • Voysis - what it is and what it does
  • Getting sophisticated with searches
  • Designing purely for voice vs multi modal
  • The challenge of ecommerce with a zero UI
  • The nuance between the GUI assistant and voice only assistants
  • How multi modal voice experiences can help the shopping experience
  • Making the transition from GUI to VUI
  • The similarities between moving from web to mobile and from mobile to voice - (when moving to mobile, you had to think about gestures and smaller screens)
  • Error states and points of delight
  • The difference between designing for voice and designing for a screen
  • Testing for voice
  • Understanding voice first ergonomics


Our Guest

Brian Colcord, VP of Design at Voysis, is a world-leading designer, a cool, calm and collected speaker and a passionate sneakerhead.


After designing the early versions of the JoinMe brand markings and UI, he was recruited by LogMeIn and went on to be one of the first designers to work on the Apple Watch prior to its release.


Brian has made the transition from GUI to VUI design and shares with us his passion for voice, how he made the transition, what he learned and how you can do it too.


About Voysis

Voysis is a Dublin-based voice technology company that believes voice interactions can be as natural as human ones and is working intently to give brands the capability to have natural language interactions with customers.


Links


Check out the Voysis website

Follow Voysis on Twitter

Read the Voysis blog

Join Brian on LinkedIn

Follow Brian on Twitter

Listen to the AI in industry podcast with Voysis CEO, Peter Cahill

Read Brian's post, You're already a voice designer, you just don't know it yet


Where to listen


]]>
We’ve covered plenty of voice first design and development on this podcast. Well, that’s what the podcast is, so we’re bound to! Most of what we’ve discussed has largely been voice assistant or smart speaker-focused. We haven’t covered a huge amount of voice first application in the browser and on mobile, until now.


Mic check

You’ll have noticed the little mic symbol popping up on a number of websites lately. It’s in the Google search bar, it’s on websites such as EchoSim, and Spotify is trialling it too. When you press that mic symbol, it enables your mic on whatever device you’re using and lets you speak your search term.


Next time you see that mic, you could be looking at the entry point to Voysis.

On a lot of websites, that search may well just use the website’s standard search tool to perform the search. With Voysis, its engine will perform the search for you using its voice tech stack.


That means that you can perform more elaborate searches that most search engines would struggle with. For example:

“Show me Nike Air Max trainers, size 8, in black, under $150”


Most search engines would freak out at this, but not Voysis. That’s what it does.

Of course, it’s more than an ecommerce search tool, as we’ll find out during this episode.


In this episode

We discuss how approaches to new technology seem to wrongly follow a reincarnation route. Turning print into web by using the same principles that govern print. Turning online into mobile by using the same principles that govern the web. Then taking the practices and principles of GUI and transferring them to VUI. We touch on why moving your app to voice is the wrong approach.


We also discuss:

  • Voysis - what it is and what it does
  • Getting sophisticated with searches
  • Designing purely for voice vs multi modal
  • The challenge of ecommerce with a zero UI
  • The nuance between the GUI assistant and voice only assistants
  • How multi modal voice experiences can help the shopping experience
  • Making the transition from GUI to VUI
  • The similarities between moving from web to mobile and from mobile to voice - (when moving to mobile, you had to think about gestures and smaller screens)
  • Error states and points of delight
  • The difference between designing for voice and designing for a screen
  • Testing for voice
  • Understanding voice first ergonomics


Our Guest

Brian Colcord, VP of Design at Voysis, is a world-leading designer, a cool, calm and collected speaker and a passionate sneakerhead.


After designing the early versions of the JoinMe brand markings and UI, he was recruited by LogMeIn and went on to be one of the first designers to work on the Apple Watch prior to its release.


Brian has made the transition from GUI to VUI design and shares with us his passion for voice, how he made the transition, what he learned and how you can do it too.


About Voysis

Voysis is a Dublin-based voice technology company that believes voice interactions can be as natural as human ones and is working intently to give brands the capability to have natural language interactions with customers.


Links


Check out the Voysis website

Follow Voysis on Twitter

Read the Voysis blog

Join Brian on LinkedIn

Follow Brian on Twitter

Listen to the AI in industry podcast with Voysis CEO, Peter Cahill

Read Brian's post, You're already a voice designer, you just don't know it yet


Where to listen


]]>
All about voice first games with Florian Hollandt All about voice first games with Florian Hollandt Mon, 30 Apr 2018 04:00:00 GMT 57:26 5ae44f24928bf5cf7926bc26 no https://vux.world/voice-first-games/ Tools and techniques for creating voice first games full 14 Voice first games are one of the most popular Amazon Alexa skill categories. So what type of voice games are available? And how do you create them? We speak to game developer and reviewer, Florian Hollandt, to find out.


Games are helping Alexa take off. According to Voicebot.ai, Alexa Skill games are the second most popular skill category behind smart home skills. Amazon has been encouraging the development of games, too. We've seen the Alexa Skills Challenge: Kids recently and I'd say it’s more than likely that most of the developer rewards will have gone to game developers, given the engaging nature of games.


We’ve touched upon voice first games on the podcast previously, such as our chat with Jo Jaquinta of Tsa Tsa Tzu, but we haven’t yet covered audio game development in detail, which is what we’ll do today.


Creating voice first games

In this episode, we’ll be getting into detail about the different kinds of voice first games that are out there, as well as looking at some of the techniques you can use to create engaging games, such as interactive stories.


We’ll cover things like:

  • Naming a game and how the wrong name can reduce discoverability
  • The challenge of providing content
  • The one game per month challenge
  • The types of games that are available on Amazon Alexa
  • Game design techniques
  • Interactive story game development techniques
  • Fake decisions - what are they and how can you use them to enhance engagement (see the sketch after this list)
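As a rough illustration of the ‘fake decisions’ idea (a sketch, not Florian’s implementation): both choices below get their own acknowledgement but lead to the same next scene, so the game feels responsive without doubling the amount of content you have to write. The scene names and copy are invented.

```python
# Hypothetical interactive-story fragment illustrating a "fake decision":
# each choice is acknowledged differently, but both converge on the same
# next scene, keeping the branching content manageable.
story = {
    "campfire": {
        "prompt": "Do you take the coastal path or the forest trail?",
        "choices": {
            "coastal path": ("You follow the cliffs as the wind picks up.", "old_bridge"),
            "forest trail": ("You weave between the pines as the light fades.", "old_bridge"),
        },
    },
    "old_bridge": {
        "prompt": "Either way, you arrive at the old bridge. Do you cross it?",
        "choices": {},
    },
}

def respond(scene_id, user_choice):
    """Return the acknowledgement line plus the next scene's prompt."""
    flavour, next_scene = story[scene_id]["choices"][user_choice]
    return f"{flavour} {story[next_scene]['prompt']}"

print(respond("campfire", "forest trail"))
```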


Our Guest

Florian Hollandt is the Product Manager at Jovo, the cross-platform voice app framework, and is also an Alexa game developer and reviewer. He’s created some popular games on Alexa, such as the German card game Mau Mau, and has written a ton of voice first game reviews on Medium.


Florian is madly passionate about voice first games and his knowledge on the subject is impressive. He guides us through his experience and shares some delightful tips on how you can start creating voice first games yourself.


Links

Some of the things Florian spoke about:


]]>
Voice first games are one of the most popular Amazon Alexa skill categories. So what type of voice games are available? And how do you create them? We speak to game developer and reviewer, Florian Hollandt, to find out.


Games are helping Alexa take off. According to Voicebot.ai, Alexa Skill games are the second most popular skill category behind smart home skills. Amazon has been encouraging the development of games, too. We've seen the Alexa Skills Challenge: Kids recently and I'd say it’s more than likely that most of the developer rewards will have gone to game developers, given the engaging nature of games.


We’ve touched upon voice first games on the podcast previously, such as our chat with Jo Jaquinta of Tsa Tsa Tzu, but we haven’t yet covered audio game development in detail, which is what we’ll do today.


Creating voice first games

In this episode, we’ll be getting into detail about the different kinds of voice first games that are out there, as well as looking at some of the techniques you can use to create engaging games, such as interactive stories.


We’ll cover things like:

  • Naming a game and how the wrong name can reduce discoverability
  • The challenge of providing content
  • The one game per month challenge
  • The types of games that are available on Amazon Alexa
  • Game design techniques
  • Interactive story game development techniques
  • Fake decisions - what are they and how can you use them to enhance engagement


Our Guest

Florian Hollandt is the Product Manager at Jovo, the cross-platform voice app framework, and is also an Alexa game developer and reviewer. He’s created some popular games on Alexa, such as the German card game Mau Mau, and has written a ton of voice first game reviews on Medium.


Florian is madly passionate about voice first games and his knowledge on the subject is impressive. He guides us through his experience and shares some delightful tips on how you can start creating voice first games yourself.


Links

Some of the things Florian spoke about:


]]>
Turning Alexa for Business into a business with Bob Stolzberg Turning Alexa for Business into a business with Bob Stolzberg Mon, 23 Apr 2018 04:00:00 GMT 51:13 5ad98c744b7b3f235adbc971 no https://vux.world/turning-alexa-for-business-into-a-business Today, we’re following the story of the inspirational Bob Stolzberg of VoiceXP, and giving you some deep insights into how you can turn Alexa for Business into a business. full 13 Today, we’re following the story of the inspirational Bob Stolzberg of VoiceXP, and giving you some deep insights into how you can turn Alexa for Business into a business.


In this episode, Dustin and I are getting into the detail of how VoiceXP came to be, how Bob almost made $14,500 profit from his first Alexa Skill, why voice is such a big opportunity and how he turned Alexa for Business into a business.


We’re also discussing the features that come with Amazon Alexa for Business and some example use cases taken from Bob’s experience, as well as plenty of other areas such as:


  • Selling to corporate clients
  • The difference between a skill builder and a business
  • The risks of using Amazon Alexa in business
  • Security concerns and DR compliance
  • The risks that corporate clients face and mitigations
  • The importance of being an Amazon partner
  • Private vs public skills
  • Locking down devices
  • Use cases and future use cases
  • Reporting and analytics
  • Agnostic roadmaps
  • The hard work required to start a startup


Our Guest

After spending 20 years working in the enterprise IT field, Bob Stolzberg founded VoiceXP, the voice first company that helps businesses create efficiencies and increase productivity through voice. Bob and his team work with enterprise clients and SMEs to implement Alexa for Business within organisations. From designing and building specific skills for clients, to the full implementation of the devices and platform.


Bob’s experience of the enterprise IT environment gives him a unique understanding of the corporate IT world, the kind of people that make purchasing decisions and the kind of risks or concerns IT professionals will perceive with new technology platforms such as this. He’s managed to overcome those concerns, mitigate those risks and build a thriving business that’s just joined one of the top startup accelerators in the US, Capital Innovators.


Bob’s an immensely engaging and passionate guy, and offers some amazing guidance and pointers for anyone looking to turn voice into a business. This is a truly inspirational listen.


Links

 


Where to listen


]]>
Today, we’re following the story of the inspirational Bob Stolzberg of VoiceXP, and giving you some deep insights into how you can turn Alexa for Business into a business.


In this episode, Dustin and I are getting into the detail of how VoiceXP came to be, how Bob almost made $14,500 profit from his first Alexa Skill, why voice is such a big opportunity and how he turned Alexa for Business into a business.


We’re also discussing the features that come with Amazon Alexa for Business and some example use cases taken from Bob’s experience, as well as plenty of other areas such as:


  • Selling to corporate clients
  • The difference between a skill builder and a business
  • The risks of using Amazon Alexa in business
  • Security concerns and DR compliance
  • The risks that corporate clients face and mitigations
  • The importance of being an Amazon partner
  • Private vs public skills
  • Locking down devices
  • Use cases and future use cases
  • Reporting and analytics
  • Agnostic roadmaps
  • The hard work required to start a startup


Our Guest

After spending 20 years working in the enterprise IT field, Bob Stolzberg founded VoiceXP, the voice first company that helps businesses create efficiencies and increase productivity through voice. Bob and his team work with enterprise clients and SMEs to implement Alexa for Business within organisations. From designing and building specific skills for clients, to the full implementation of the devices and platform.


Bob’s experience of the enterprise IT environment gives him a unique understanding of the corporate IT world, the kind of people that make purchasing decisions and the kind of risks or concerns IT professionals will perceive with new technology platforms such as this. He’s managed to overcome those concerns, mitigate those risks and build a thriving business that’s just joined one of the top startup accelerators in the US, Capital Innovators.


Bob’s an immensely engaging and passionate guy, and offers some amazing guidance and pointers for anyone looking to turn voice into a business. This is a truly inspirational listen.


Links

 


Where to listen


]]>
How people REALLY use Amazon Alexa with Martin Porcheron How people REALLY use Amazon Alexa with Martin Porcheron Mon, 16 Apr 2018 05:55:45 GMT 1:10:37 5ad43ae1c51eb4310908e832 no https://shows.pippa.io/vux-world/how-people-really-use-amazon-alexa full Today, we’re discussing the findings of Martin Porcheron’s study, ‘Voice interfaces in everyday life’. We uncover insights into how people actually use Amazon Alexa in the home. We find unique user behaviour, new technology challenges and understand what it all means for voice UX designers, developers and brands.


Voice interfaces in everyday life

Imagine if you could eavesdrop on someone's house and listen to how they interact with their Amazon Echo. Imagine, whenever someone said “Alexa”, you were there. Imagine being able to hear everything that was said for an entire minute before the word “Alexa” was uttered, and then sticking around for a whole 60 seconds after the interaction with Alexa was over.

Well, that’s exactly what today’s guest and his associates did, and his findings offer some unique lessons for VUX designers, developers and brands that’ll help you create more natural voice user experiences that work.


In this episode, we’re discussing:

  • How people use digital assistants in public
  • The background of Voice interfaces in everyday life
  • The challenge of what you call your Alexa skill
  • The issue of recall
  • How Amazon can improve skill usage
  • The inherent problem of discoverability in voice
  • How Echo use is finely integrated into other activities
  • The implications of treating an Echo as a single user device
  • The challenge of speech recognition in the ‘hurly burly’ of modern life
  • How people collaboratively attempt to solve interaction problems
  • What is ‘political’ control and how does it apply to voice first devices?
  • Pranking people’s Alexa and the effect on future Amazon advertising
  • Designing for device control
  • Why these devices aren’t actually conversational
  • The importance of responses

Key takeaways for designers and developers

  • Give your skill a name that’s easy for recall
  • Make your responses succinct so they fit within a busy and crowded environment
  • Make sure your responses are a resource for further action - how will the user do the next thing? (see the sketch after this list)
  • Consider designing for multiple users
  • Don’t use long intros and tutorials, get straight to the point
  • Don’t design for a conversation, design to get things done
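Here’s a minimal sketch of what those last points can look like in practice, using the shape of an Alexa custom skill response. The skill and wording are invented; only the response structure follows the Alexa format.

```python
# Hypothetical skill response illustrating "succinct + a resource for further
# action". The copy is invented; the structure follows the Alexa custom skill
# response format (outputSpeech, reprompt, shouldEndSession).
succinct_response = {
    "version": "1.0",
    "response": {
        "outputSpeech": {
            "type": "PlainText",
            # Short answer first, then a clear pointer to the next action.
            "text": "Your order ships tomorrow. Want me to text you the tracking link?",
        },
        "reprompt": {
            "outputSpeech": {
                "type": "PlainText",
                "text": "Should I text you the tracking link?",
            }
        },
        # Keep the session open so the user can act on the prompt.
        "shouldEndSession": False,
    },
}
```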

Our Guest

Martin Porcheron is a Research Associate in the Mixed Reality Lab at the University of Nottingham and has a PhD in Ubiquitous Computing, a sub-set of Computer Science. Martin has conducted several studies in the field of human-computer interaction, including looking at how people make use of mobile phones in conversations i.e. how people use something like Siri mid-conversation and how those interactions unfold.

Martin’s angle isn’t to look at these things as critical or problematic, but to approach them as an opportunity to learn about how people make use of technology currently. He believes this enables us to make more informed design decisions.

The study we discuss today has won many plaudits, including a Best Paper Award at the CHI 2018 conference.


Links


Where you can listen:


]]>
Today, we’re discussing the findings of Martin Porcheron’s study, ‘Voice interfaces in everyday life’. We uncover insights into how people actually use Amazon Alexa in the home. We find unique user behaviour, new technology challenges and understand what it all means for voice UX designers, developers and brands.


Voice interfaces in everyday life

Imagine if you could eavesdrop on someone's house and listen to how they interact with their Amazon Echo. Imagine, whenever someone said “Alexa”, you were there. Imagine being able to hear everything that was said for an entire minute before the word “Alexa” was uttered, and then sticking around for a whole 60 seconds after the interaction with Alexa was over.

Well, that’s exactly what today’s guest and his associates did, and his findings offer some unique lessons for VUX designers, developers and brands that’ll help you create more natural voice user experiences that work.


In this episode, we’re discussing:

  • How people use digital assistants in public
  • The background of Voice interfaces in everyday life
  • The challenge of what you call your Alexa skill
  • The issue of recall
  • How Amazon can improve skill usage
  • The inherent problem of discoverability in voice
  • How Echo use is finely integrated into other activities
  • The implications of treating an Echo as a single user device
  • The challenge of speech recognition in the ‘hurly burly’ of modern life
  • How people collaboratively attempt to solve interaction problems
  • What is ‘political’ control and how does it apply to voice first devices?
  • Pranking people’s Alexa and the effect on future Amazon advertising
  • Designing for device control
  • Why these devices aren’t actually conversational
  • The importance of responses

Key takeaways for designers and developers

  • Give your skill a name that’s easy for recall
  • Make your responses succinct so they fit within a busy and crowded environment
  • Make sure your responses are a resource for further action - how will the user do the next thing?
  • Consider designing for multiple users
  • Don’t use long intros and tutorials, get straight to the point
  • Don’t design for a conversation, design to get things done

Our Guest

Martin Porcheron is a Research Associate in the Mixed Reality Lab at the University of Nottingham and has a PhD in Ubiquitous Computing, a sub-set of Computer Science. Martin has conducted several studies in the field of human-computer interaction, including looking at how people make use of mobile phones in conversations i.e. how people use something like Siri mid-conversation and how those interactions unfold.

Martin’s angle isn’t to look at these things as critical or problematic, but to approach them as an opportunity to learn about how people make use of technology currently. He believes this enables us to make more informed design decisions.

The study we discuss today has won many plaudits, including a Best Paper Award at the CHI 2018 conference.


Links


Where you can listen:


]]>
Tackling the challenges of discoverability and monetisation on Amazon Alexa with Jo Jaquinta Tackling the challenges of discoverability and monetisation on Amazon Alexa with Jo Jaquinta Mon, 09 Apr 2018 05:28:59 GMT 1:19:09 5acaf9b282cc21353715603b no https://vux.world/challenges-discoverability-and-monetisation-amazon-alexa Getting deep into the biggest challenges facing creators on Alexa: being discovered and making money full Today, we're getting deep into the biggest challenges facing designers and developers on the Alexa platform: being discovered and making money. And who better to take us through it, than one of the most experienced developers on the voice scene, Jo 'the Oracle' Jaquinta.


Speak to anyone who's serious about voice first development and they'll tell you the two biggest challenges facing the voice first world right now are skill discoverability and monetisation. Vasili Shynkarenka of Storyline mentioned it and so did Matt Hartman of Betaworks when they featured on the VUX World podcast previously.


However, we rarely hear stories from people who've tried everything they can to overcome these challenges. Until now.


In this episode, we're joined by Dustin Coates as co-host and we're speaking to Jo about his vast experience of designing and developing on the Amazon Alexa platform and how he's approached tackling those two big challenges.


We also discuss voice UX design techniques that Jo's picked up along the way, as well as the tools and techniques he uses for developing skills.


This one is jam-packed with epic insights from one of the most knowledgeable people in this space right now, and includes discussion on a vast array of subjects, including:


Discoverability:

  • The impact of advertising on increasing skill adoption
  • The effect of being featured in the Amazon Alexa newsletter
  • What Amazon can do to help skill discovery
  • How transferring between modalities can lose users


Monetisation:

  • The challenges of turning skill development into a business
  • The difference between Google’s and Amazon’s strategy
  • The two ways to make money from voice: the easy way and the hard way
  • Why a monetisation API shouldn't be the focus for developers
  • Why Amazon Alexa developer payouts are bad for the voice environment


Design:

  • The challenges of designing for voice with a screen
  • How immersive audio games help the visually impaired
  • How Amazon could improve the UX for users by moving to a 'streaming' approach to voice
  • Why you shouldn’t be aiming for a ‘conversational’ experience
  • What is the method of Loci and how can it be used when designing for voice?


Development:

  • Fuzzy matching (see the sketch after this list)
  • Building and maintaining your own library and SDK
  • Cross platform development
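As a hedged illustration of the fuzzy matching idea (not Jo’s actual implementation), Python’s standard library can map a slightly misheard transcript onto the closest phrase a skill already knows about. The commands and cut-off below are made up.

```python
import difflib

# Hypothetical list of phrases the skill understands.
KNOWN_COMMANDS = ["attack the goblin", "open the door", "check inventory", "go north"]

def fuzzy_match(transcript, candidates=KNOWN_COMMANDS, cutoff=0.6):
    """Return the best-matching known command, or None if nothing is close enough."""
    matches = difflib.get_close_matches(transcript.lower(), candidates, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(fuzzy_match("attack the gobblin"))  # -> "attack the goblin"
print(fuzzy_match("sing a song"))         # -> None (nothing close enough)
```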


Other gems include:

  • Structural problems with the Alexa platform
  • How company culture affects voice strategy
  • Why it’s not early days in voice
  • Alexa for business and privacy


Our Guest

Jo Jaquinta is a software developer with over 20 years' experience. Jo started building skills on the Alexa platform a short time after it was released, has created a host of interesting skills and learned plenty along the way through pulling Alexa in all kinds of different directions. His knowledge, experience and plenty of lessons learned were all applied in building Jo's most recent skill, the madly complex, 6 Swords.


Jo shares plenty of his voice design and development knowledge on his YouTube channel, which is full of engaging and interesting insights, and has put pen to paper to share his knowledge in the shape of two books on Alexa: How to Program Amazon Echo and Developing Amazon Alexa Games. He's also active on the Alexa Slack channel, helping people solve their development problems and consulting on voice design and development.


What Jo doesn't know about developing on Alexa isn't worth knowing. His immense knowledge and vast experience in this area are pretty much unrivalled, which is why I refer to him as 'the Oracle'.




Links


Where to Listen:

]]>
Today, we're getting deep into the biggest challenges facing designers and developers on the Alexa platform: being discovered and making money. And who better to take us through it, than one of the most experienced developers on the voice scene, Jo 'the Oracle' Jaquinta.


Speak to anyone who's serious about voice first development and they'll tell you the two biggest challenges facing the voice first world right now are skill discoverability and monetisation. Vasili Shynkarenka of Storyline mentioned it and so did Matt Hartman of Betaworks when they featured on the VUX World podcast previously.


However, we rarely hear stories from people who've tried everything they can to overcome these challenges. Until now.


In this episode, we're joined by Dustin Coates as co-host and we're speaking to Jo about his vast experience of designing and developing on the Amazon Alexa platform and how he's approached tackling those two big challenges.


We also discuss voice UX design techniques that Jo's picked up along the way, as well as the tools and techniques he uses for developing skills.


This one is jam-packed with epic insights from one of the most knowledgeable people in this space right now, and includes discussion on a vast array of subjects, including:


Discoverability:

  • The impact of advertising on increasing skill adoption
  • The effect of being featured in the Amazon Alexa newsletter
  • What Amazon can do to help skill discovery
  • How transferring between modalities can lose users


Monetisation:

  • The challenges of turning skill development into a business
  • The difference between Google’s and Amazon’s strategy
  • The two ways to make money from voice: the easy way and the hard way
  • Why a monetisation API shouldn't be the focus for developers
  • Why Amazon Alexa developer payouts are bad for the voice environment


Design:

  • The challenges of designing for voice with a screen
  • How immersive audio games help the visually impaired
  • How Amazon could improve the UX for users by moving to a 'streaming' approach to voice
  • Why you shouldn’t be aiming for a ‘conversational’ experience
  • What is the method of Loci and how can it be used when designing for voice?


Development:

  • Fuzzy matching
  • Building and maintaining your own library and SDK
  • Cross platform development


Other gems include:

  • Structural problems with the Alexa platform
  • How company culture affects voice strategy
  • Why it’s not early days in voice
  • Alexa for business and privacy


Our Guest

Jo Jaquinta is a software developer with over 20 years' experience. Jo started building skills on the Alexa platform a short time after it was released, has created a host of interesting skills and learned plenty along the way through pulling Alexa in all kinds of different directions. His knowledge, experience and plenty of lessons learned were all applied in building Jo's most recent skill, the madly complex, 6 Swords.


Jo shares plenty of his voice design and development knowledge on his YouTube channel, which is full of engaging and interesting insights, and has put pen to paper to share his knowledge in the shape of two books on Alexa: How to Program Amazon Echo and Developing Amazon Alexa Games. He's also active on the Alexa Slack channel, helping people solve their development problems and consulting on voice design and development.


What Jo doesn't know about developing on Alexa isn't worth knowing. His immense knowledge and vast experience in this area are pretty much unrivalled, which is why I refer to him as 'the Oracle'.




Links


Where to Listen:

]]>
My first 30 days as a VUI designer with Ilana Shalowitz and Brian Bauman My first 30 days as a VUI designer with Ilana Shalowitz and Brian Bauman Mon, 02 Apr 2018 05:57:30 GMT 1:00:56 5ac1c64a76380bf04c806d91 no https://vux.world/my-first-30-days-as-a-vui-designer/ full Today, we’re getting into detail about what it’s like to be a full-time VUI designer. We’re discussing the details of the role, the day to day duties and the skillsets that are important to succeed in designing voice user interfaces.

The role of a VUI designer has been around for a while, but it’s not so common. However, with the rise of voice as an access point for controlling technology, this is one of the roles of the future.


If you’re planning for that future and are considering seeking work in the voice first space; or if you’re a voice first design hobbyist looking to take it full-time; or if you’re generally interested in what it takes to create conversational interfaces, then this is a great episode for you.


We’re joined by two professional VUI designers, Ilana Shalowitz and Brian Bauman of Emmi, and together they’ll be taking us through the ins and outs of the role that designs voice user interfaces for Emmi’s care calls.


In this episode

Ilana takes us through an overview of the VUI designer role and discusses what skillsets are important. She takes us through the interview process, bedding in, and drops some detailed knowledge of voice user interface design based on her years of experience in the field.


Brian then takes us through the specifics of the role: where a VUI designer fits into a project, what the day to day activities and duties are, and what he found during his first 30 days.


We also discuss things like:

  • How to pronounce VUI (V.U.I. or "Vooey")
  • The difference between chat bot design and conversational vui
  • What is prosody and why is it important (see the SSML sketch after this list)
  • Language
  • Breathing
  • Error recovery
  • Directing voice talent
  • Reporting and measuring success
  • Broader voice user interface design tips
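To give a flavour of how prosody and breathing show up in the day-to-day work, here’s a tiny, invented example using SSML, the markup most voice platforms accept for controlling delivery, wrapped in a Python string. Exact element and attribute support varies by platform, so treat the specifics as assumptions.

```python
# Invented example copy. <prosody> and <break> are standard SSML elements,
# though attribute support varies between Alexa, Google and other TTS engines.
# The 400ms break gives the listener a "breath" before the question, and the
# slowed prosody softens the key line -- the kind of delivery choices a VUI
# designer makes (or directs voice talent on) every day.
care_call_prompt = (
    "<speak>"
    "Hi, this is a reminder about your appointment on Thursday. "
    '<break time="400ms"/>'
    '<prosody rate="slow">Would you like me to repeat the details,</prosody> '
    "or are you all set?"
    "</speak>"
)

print(care_call_prompt)
```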


Our guests

Ilana Shalowitz is the VUI Design Manager at Emmi and has a background in marketing and design. Ilana is building a great reputation in the voice first space and is quickly becoming a leading voice for voice in the healthcare sector. She featured at the Alexa Conference 2018, spoke at the AI Summit 2018, has featured on the VoiceFirst.FM Voice of Healthcare podcast (Episode 5) and is a keynote speaker at the Voice of Healthcare Summit in August in Boston.


Brian Bauman joined Emmi recently, taking on his first role as a VUI designer. A former playwright with a background in the creative arts, he fills us in on what his first month as a VUI designer was like and how his creative background gave him some valuable transferable skills.


About Emmi

Emmi Solutions is part of the Wolters Kluwer stable and helps care organisations extend the reach of their care through technology.


Ilana and Brian both work on the automated, voice-based outbound calls side of the company. They create call scripts and dialogue flows that are turned into real calls that patients receive and can interact with in conversation. This means that healthcare providers can speak to thousands of patients without needing to make any manual calls at all.


Links


]]>
Today, we’re getting into detail about what it’s like to be a full-time VUI designer. We’re discussing the details of the role, the day to day duties and the skillsets that are important to succeed in designing voice user interfaces.

The role of a VUI designer has been around for a while, but it’s not so common. However, with the rise of voice as an access point for controlling technology, this is one of the roles of the future.


If you’re planning for that future and are considering seeking work in the voice first space; or if you’re a voice first design hobbyist looking to take it full-time; or if you’re generally interested in what it takes to create conversational interfaces, then this is a great episode for you.


We’re joined by two professional VUI designers, Ilana Shalowitz and Brian Bauman of Emmi, and together they’ll be taking us through the ins and outs of the role that designs voice user interfaces for Emmi’s care calls.


In this episode

Ilana takes us through an overview of the VUI designer role and discusses what skillsets are important. She takes us through the interview process, bedding in, and drops some detailed knowledge of voice user interface design based on her years of experience in the field.


Brian then takes us through the specifics of the role: where a VUI designer fits into a project, what the day to day activities and duties are, and what he found during his first 30 days.


We also discuss things like:

  • How to pronounce VUI (V.U.I. or "Vooey")
  • The difference between chat bot design and conversational vui
  • What is prosody and why is it important
  • Language
  • Breathing
  • Error recovery
  • Directing voice talent
  • Reporting and measuring success
  • Broader voice user interface design tips


Our guests

Ilana Shalowitz is the VUI Design Manager at Emmi and has a background in marketing and design. Ilana is building a great reputation in the voice first space and is quickly becoming a leading voice for voice in the healthcare sector. She featured at the Alexa Conference 2018, spoke at the AI Summit 2018, has featured on the VoiceFirst.FM Voice of Healthcare podcast (Episode 5) and is a keynote speaker at the Voice of Healthcare Summit in August in Boston.


Brian Bauman joined Emmi recently, taking on his first role as a VUI designer. A former playwright with a background in the creative arts, he fills us in on what his first month as a VUI designer was like and how his creative background gave him some valuable transferable skills.


About Emmi

Emmi Solutions is part of the Wolters Kluwer stable and helps care organisations extend the reach of their care through technology.


Ilana and Brian both work on the automated, voice-based outbound calls side of the company. They create call scripts and dialogue flows that are turned into real calls that patients receive and can interact with in conversation. This means that healthcare providers can speak to thousands of patients without needing to make any manual calls at all.


Links


]]>
Voice first user research with Konstantin Samoylov and Adam Banks Voice first user research with Konstantin Samoylov and Adam Banks Mon, 26 Mar 2018 08:28:12 GMT 1:18:12 5ab8af1d791733c7782ca84d no https://vux.world/voice-first-user-research/ full We’re talking to ex-Googlers, Konstantin Samoylov and Adam Banks, about their findings from conducting research on voice assistants at Google and their approach to building world-leading UX labs.

This episode is a whirlwind of insights, practical advice and engaging anecdotes that cover the width and breadth of user research and user behaviour in the voice first and voice assistant space. It’s littered with examples of user behaviour found when researching voice at Google and peppered with guidance on how to create world-class user research spaces.

Some of the things we discuss include:

  • Findings from countless voice assistant studies at Google
  • Real user behaviour in the on-boarding process
  • User trust of voice assistants
  • What people expect from voice assistants
  • User mental models when using voice assistants
  • The difference between replicating your app and designing for voice
  • The difference between a voice assistant and a voice interface
  • The difference between user expectations and reality
  • How voice assistant responses can shape people’s expectations of the assistant’s full functionality
  • What makes a good UX lab
  • How to design a user research space
  • How voice will disrupt and challenge organisational structure
  • Is there a place for advertising on voice assistants?
  • Mistakes people make when seeking a voice presence (Hint: starting with ‘let’s create an Alexa Skill’ rather than ‘how will people interact with our brand via voice?’)
  • The importance (or lack thereof) of speed in voice user interfaces
  • How to fit voice user research into a design sprint

Plus, for those of you watching on YouTube, we have a tour of the UX Lab in a Box!


Our Guests

Konstantin Samoylov and Adam Banks are world-leading user researchers and research lab creators, and founders of user research consultancy firm, UX Study.

The duo left Google in 2016 after pioneering studies in virtual assistants and voice, as well as designing and creating over 50 user research labs across the globe, and managing the entirety of Google’s global user research spaces.

While Konstantin and Adam were working as researchers and lab builders at Google, and showing companies their research spaces, plenty of companies asked them whether they could recommend a company to build a similar lab. Upon realising that no such company existed, they set about creating it!

UX Study designs and builds research and design spaces for companies, provides research consultancy services and training, as well as hires and sells its signature product, UX Lab in a Box.


UX Lab in a Box

The Lab in a Box (http://ux-study.com/products/lab-in-a-box/) is an audio and video recording, mixing and broadcasting unit designed specifically to help user researchers conduct reliable, consistent and speedy studies.

It converts any space into a user research lab in minutes and helps researchers focus on the most important aspect of their role - research!

It was born after the duo, in true researcher style, conducted user research on user researchers and found that 30% of a researcher’s time is spent fiddling with cables, setting up studies, editing video and generally faffing around doing things that aren’t research!


Konstantin Samoylov

Konstantin Samoylov is an award-winning user researcher. He has nearly 20 years’ experience in the field and has conducted over 1000 user research studies.

He was part of the team that pioneered voice at Google and was the first researcher to focus on voice dialogues and actions. By the time he left, just 2 years ago, most of the studies into user behaviour on voice assistants at Google were conducted by him.


Adam Banks

It’s likely that Adam Banks has more experience in creating user research spaces than anyone else on the planet. He designed, built and managed all of Google’s user research labs globally including the newly-opened ‘Userplex’ in San Francisco.

He’s created over 50 research and design spaces across the globe for Google, and also has vast experience in conducting user research himself.


Links

Visit the UX Study website

Follow UX Study on Twitter

Check out the UX Lab in a Box

Follow Konstantin on Twitter

Follow Adam on Twitter

]]>
We’re talking to ex-Googlers, Konstantin Samoylov and Adam Banks, about their findings from conducting research on voice assistants at Google and their approach to building world-leading UX labs.

This episode is a whirlwind of insights, practical advice and engaging anecdotes that cover the width and breadth of user research and user behaviour in the voice first and voice assistant space. It’s littered with examples of user behaviour found when researching voice at Google and peppered with guidance on how to create world-class user research spaces.

Some of the things we discuss include:

  • Findings from countless voice assistant studies at Google
  • Real user behaviour in the on-boarding process
  • User trust of voice assistants
  • What people expect from voice assistants
  • User mental models when using voice assistants
  • The difference between replicating your app and designing for voice
  • The difference between a voice assistant and a voice interface
  • The difference between user expectations and reality
  • How voice assistant responses can shape people’s expectations of the assistant’s full functionality
  • What makes a good UX lab
  • How to design a user research space
  • How voice will disrupt and challenge organisational structure
  • Is there a place for advertising on voice assistants?
  • Mistakes people make when seeking a voice presence (Hint: starting with ‘let’s create an Alexa Skill’ rather than ‘how will people interact with our brand via voice?’)
  • The importance (or lack thereof) of speed in voice user interfaces
  • How to fit voice user research into a design sprint

Plus, for those of you watching on YouTube, we have a tour of the UX Lab in a Box!


Our Guests

Konstantin Samoylov and Adam Banks are world-leading user researchers and research lab creators, and founders of user research consultancy firm, UX Study.

The duo left Google in 2016 after pioneering studies in virtual assistants and voice, as well as designing and creating over 50 user research labs across the globe, and managing the entirety of Google’s global user research spaces.

While Konstantin and Adam were working as researchers and lab builders at Google, and showing companies their research spaces, plenty of companies asked them whether they could recommend a company to build a similar lab. Upon realising that no such company existed, they set about creating it!

UX Study designs and builds research and design spaces for companies, provides research consultancy services and training, as well as hires and sells its signature product, UX Lab in a Box.


UX Lab in a Box

The Lab in a Box (http://ux-study.com/products/lab-in-a-box/) is an audio and video recording, mixing and broadcasting unit designed specifically to help user researchers conduct reliable, consistent and speedy studies.

It converts any space into a user research lab in minutes and helps researchers focus on the most important aspect of their role - research!

It was born after the duo, in true researcher style, conducted user research on user researchers and found that 30% of a researcher’s time is spent fiddling with cables, setting up studies, editing video and generally faffing around doing things that aren’t research!


Konstantin Samoylov

Konstantin Samoylov is an award-winning user researcher. He has nearly 20 years’ experience in the field and has conducted over 1000 user research studies.

He was part of the team that pioneered voice at Google and was the first researcher to focus on voice dialogues and actions. By the time he left, just 2 years ago, most of the studies into user behaviour on voice assistants at Google were conducted by him.


Adam Banks

It’s likely that Adam Banks has more experience in creating user research spaces than anyone else on the planet. He designed, built and managed all of Google’s user research labs globally including the newly-opened ‘Userplex’ in San Francisco.

He’s created over 50 research and design spaces across the globe for Google, and also has vast experience in conducting user research himself.


Links

Visit the UX Study website

Follow UX Study on Twitter

Check out the UX Lab in a Box

Follow Konstantin on Twitter

Follow Adam on Twitter

]]>
Hearing voices: a strategic view of the voice space with Matt Hartman Hearing voices: a strategic view of the voice space with Matt Hartman Mon, 19 Mar 2018 05:00:00 GMT 48:28 5aab80b1f1b0453a67c92e21 no https://vux.world/hearing-voices full This week, Dustin and I are joined by Matt Hartman, partner at Betaworks, curator of the Hearing Voices newsletter and creator of the Wiffy Alexa Skill.


In this episode, we’re discussing:


  • All about Betaworks
  • A strategic vision for voice
  • Changing user behaviour
  • On-demand interfaces
  • Friction and psychological friction
  • How context influences your design interface
  • The 2 types of companies that’ll get built on voice platforms
  • Differences between GUI and VUI design
  • Voice camp
  • The Wiffy Alexa Skill
  • Lessons learned building your first Alexa Skill
  • Text message on-boarding
  • Challenges in the voice space


Our Guest, Matt Hartman

Matt Hartman has been with Betaworks for the past 4 years and handles the investment side of the company. Matt spends his days with his ear to the ground, meeting company founders and entrepreneurs, searching for the next big investment opportunities.


Paying attention to trends in user behaviour and searching for the next new wave of technology that will change the way people communicate has led Matt and Betaworks to focus on the voice space.


Matt has developed immense knowledge and passion for voice and is a true visionary. He totally gets the current state of play in the voice space and is a true design thinker. He has an entirely different and unique perspective on the voice scene: the voice ecosystem, voice strategy, user behaviour trends, challenges and the future of the industry.


Matt curates the Hearing Voices newsletter to share his reading with the rest of the voice space and created the Wiffy Alexa Skill, which lets you ask Alexa for the Wifi password. It’s one of the few Skills that receives the fabled Alexa Developer Reward.


Betaworks

Betaworks is a startup platform that builds products like bit.ly, Chartbeat and GIPHY. It invests in companies like Tumblr, Kickstarter and Medium and has recently turned its attention to audio and voice platforms such as Anchor, Breaker and Gimlet.


As part of voice camp in 2017, Betaworks invested in a host of voice-first companies including Jovo, who featured on episode 5 of the VUX World podcast, as well as Spoken Layer, Shine and John Done, which conversational AI guru, Jeff Smith (episode 4), was involved in.


Links


]]>
This week, Dustin and I are joined by Matt Hartman, partner at Betaworks, curator of the Hearing Voices newsletter and creator of the Wiffy Alexa Skill.


In this episode, we’re discussing:


  • All about Betaworks
  • A strategic vision for voice
  • Changing user behaviour
  • On-demand interfaces
  • Friction and psychological friction
  • How context influences your design interface
  • The 2 types of companies that’ll get built on voice platforms
  • Differences between GUI and VUI design
  • Voice camp
  • The Wiffy Alexa Skill
  • Lessons learned building your first Alexa Skill
  • Text message on-boarding
  • Challenges in the voice space


Our Guest, Matt Hartman

Matt Hartman has been with Betaworks for the past 4 years and handles the investment side of the company. Matt spends his days with his ear to the ground, meeting company founders and entrepreneurs, searching for the next big investment opportunities.


Paying attention to trends in user behaviour and searching for the next new wave of technology that will change the way people communicate has led Matt and Betaworks to focus on the voice space.


Matt has developed immense knowledge and passion for voice and is a true visionary. He totally gets the current state of play in the voice space and is a true design thinker. He has an entirely different and unique perspective on the voice scene: the voice ecosystem, voice strategy, user behaviour trends, challenges and the future of the industry.


Matt curates the Hearing Voices newsletter to share his reading with the rest of the voice space and created the Wiffy Alexa Skill, which lets you ask Alexa for the Wifi password. It’s one of the few Skills that receives the fabled Alexa Developer Reward.


Betaworks

Betaworks is a startup platform that builds products like bit.ly, Chartbeat and GIPHY. It invests in companies like Tumblr, Kickstarter and Medium and has recently turned its attention to audio and voice platforms such as Anchor, Breaker and Gimlet.


As part of voice camp in 2017, Betaworks invested in a host of voice-first companies including Jovo, who featured on episode 5 of the VUX World podcast, as well as Spoken Layer, Shine and John Done, which conversational AI guru, Jeff Smith (episode 4), was involved in.


Links


]]>
All about Mycroft with Joshua Montgomery, Steve Penrod and Derick Schweppe All about Mycroft with Joshua Montgomery, Steve Penrod and Derick Schweppe Mon, 12 Mar 2018 05:29:00 GMT 1:20:00 5aa46b670185f54f5332bca3 no https://vux.world/all-about-mycroft full This week, we’re joined by the Mycroft AI team, and we’re getting deep into designing and developing on the open source alternative to Amazon Alexa and Google Assistant.

If you’ve tried creating voice apps on platforms such as Amazon Alexa and Google Assistant, then you’ll no doubt be familiar with their current limitations. Push notifications, monetisation and all-round flexibility generally leave plenty to be desired.

What if there was an alternative? A platform that really did let you create whatever you wanted. Something that'll let you monetise. Something completely open to being used in a way that you want to use it.

Well, that’s what the team at Mycroft AI have built.




What is Mycroft AI?

Mycroft AI is the world’s first open source voice assistant that runs anywhere. On desktop, mobile, smart speakers. In cars, fridges, and washing machines. You name it. You can put it where you like and do with it what you like as well.

One member of the Mycroft community has hooked the platform up to a webcam and created a facial recognition feature that uses a person’s face instead of a wake word. When you look at the camera, the speaker wakes and is ready for you to speak to it!

As well as being open source and flexible, if you create something exceptional, it could even become the default skill for that feature on the platform. That’s like building a really great weather skill on Alexa and Amazon adopting it as the default way to tell people the weather!
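For developers wondering what building on the platform looks like, here’s a minimal sketch based on the boilerplate Mycroft has documented for Python skills. Treat the exact module, class and decorator names as assumptions; they have shifted between Mycroft versions, so check the current docs before copying anything.

```python
# A minimal, hedged sketch of a Mycroft skill in Python. The import path,
# decorator and dialog file names reflect Mycroft's documented boilerplate
# around the time of this episode and may differ in newer releases.
from mycroft import MycroftSkill, intent_file_handler


class WeatherGreetingSkill(MycroftSkill):
    """A toy skill that responds to a single intent."""

    @intent_file_handler('weather.greeting.intent')  # utterances defined in the .intent file
    def handle_weather_greeting(self, message):
        # speak_dialog picks a response line from weather.greeting.dialog
        self.speak_dialog('weather.greeting')


def create_skill():
    # Mycroft's skill loader calls this factory to instantiate the skill.
    return WeatherGreetingSkill()
```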

Plus, your personal data is kept totally private.

And Mycroft aren’t just creating cool software, they have a range of smart speakers as well. The Mark I speaker is on sale now and the Mark II is on Indiegogo right now.




Our Guests

Today, we’re joined by Joshua Montgomery, CEO; Steve Penrod, CTO; and Derick Schweppe, CDO to talk all things Mycroft AI.

We’re also joined again by co-host, Dustin Coates, and we’re getting into detail about:

  • Where Mycroft AI came from and the company’s vision for voice and AI
  • The differences between Mycroft and the other players such as Alexa and Google Assistant
  • The value of an open source voice assistant
  • About the platform (how it works, how you can get up and running)
  • About the range of smart speakers
  • Privacy and security
  • The Mycroft community and what people are building
  • Incentives and reasons to develop on Mycroft AI
  • Dev Chops with Dustin: a new feature where Dustin gets into the dev details of the Mycroft platform
  • Voice design techniques and processes
  • The future of voice

Links

How to create an Alexa Skill without coding with Vasili Shynkarenka How to create an Alexa Skill without coding with Vasili Shynkarenka Mon, 05 Mar 2018 06:33:07 GMT 1:06:07 5a9ce4a45fc658720a3ccc9d no https://vux.world/create-alexa-skills-without-coding full But first, let's welcome co-host, Dustin Coates

We're joined in this episode by our new co-host, Dustin Coates. Dustin is the author of Voice Applications for Alexa and Google Assistant and has been involved in the voice scene since day 1. With extensive experience in software engineering, deep knowledge of Alexa and Google Assistant development and an immense passion for voice, Dustin brings a new perspective and different angles of questioning that not only technical folk, but non-technical people too, will appreciate.


One of the challenges with new technology platforms is that you typically need to speak the lingo to develop on them. As the internet has progressed, there seem to be a million dev languages you'd need to be able to code in to create your website or app.


It wasn’t until relatively recently that tools cropped up to allow designers and total beginners to build on the web. Tools like Wordpress, Weebly and Squarespace have made it easy for anyone to create a presence online.


The great thing about having that history of the web is that we can learn from the past and apply the things that work well to new industries and technology. That’s exactly what Vasili has done through the creation of Storyline. It's the Weebly of voice.


It has a drag-and-drop interface and a user-friendly workflow that will allow anyone to create an Alexa Skill without needing to code a single line.


It will let more technical folk do further work if they’d like to, such as using an API integration to interrogate data, but for the less technical folk out there, what you get ‘out of the box’ is more than enough to build a well-rounded Skill.
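
To make the API integration idea concrete, here's a generic, hypothetical sketch (in Python with Flask, purely for illustration) of the sort of small JSON endpoint a no-code voice tool could be pointed at to pull in live data. It is not Storyline's actual integration mechanism, and the route and data are made up.

    from flask import Flask, jsonify

    app = Flask(__name__)

    # Illustrative data the voice app might want to read out
    OPENING_HOURS = {
        "monday": "We're open from 9am to 5pm on Monday.",
        "friday": "We're open late until 9pm on Friday.",
    }

    @app.route("/opening-hours/<day>")
    def opening_hours(day: str):
        """Return a short, speakable string for the requested day."""
        speech = OPENING_HOURS.get(day.lower(), "Sorry, I don't have opening hours for that day.")
        return jsonify({"speech": speech})

    if __name__ == "__main__":
        app.run(port=5000)

A tool like that would call the URL, read the returned 'speech' field and say it back to the user, while the conversational flow itself stays in the drag-and-drop editor.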


In fact, it’s testament to how much flexibility is baked into the tool that the recently announced winner of the Amazon Alexa Skills Challenge: Kids, Kids Court, was created in Storyline.


In this episode, we get into detail about:

  • What Storyline is, how it works and how to get up and running
  • Testing and publishing Skills
  • How to make your Skill more discoverable
  • The Storyline community
  • Future features and the roadmap
  • The challenges facing developers and solutions to solving them
  • Vasili’s vision for where the voice space is heading
  • Advice for beginner Skill-builders and voice heads




Our guest

Vasili Shynkarenka is the founder and CEO of Storyline. After creating and selling an agency that specialised in creating conversational experiences for brands, Vasili turned his full attention to Storyline.


Vasili is madly passionate about voice and has immense experience in the field. He’s super-keen for all kinds of people to get involved in creating voice experiences, no matter what their skill level. His vision for the future of smart speakers and his knowledge of the craft are inspirational.


This episode is packed with insights and tips and tricks to help people of all skill levels create an Alexa Skill.




Links

Cross-platform voice development with Jan König Cross-platform voice development with Jan König Mon, 26 Feb 2018 08:55:57 GMT 57:27 5a93cb9da5f5bf0c738a8e38 no https://vux.world/cross-platform-voice-development How to create Alexa Skills and Google Assistant apps using the same code! full 5 Find out all about the Jovo framework that lets you create Alexa Skills and Google Assistant apps at the same time, using the same code!


You know how you always need to write platform-specific code for everything? One lot of code for your iOS app, another load for Android and more for Windows (if you even bother). Well, the same challenges exist today when creating voice apps. Or rather, those challenges did exist, until Jovo came along.


With the Jovo framework, you can create an Alexa Skill and a Google Assistant app all from the same lot of code. It's part of Jovo's bigger mission to enable you to create multi-modal experiences with ease and to join together the disparate tech outlets to create a unified experience across all devices and platforms.
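
Jovo itself is a JavaScript framework, so the Python below is not its API. It's just a minimal sketch of the underlying idea: keep the conversational logic in one place and translate it into each platform's request and response envelope with thin adapters. The function names are made up for illustration.

    def handle_launch() -> str:
        """Platform-agnostic business logic: what to say when the app is opened."""
        return "Welcome to VUX World!"

    def alexa_response(speech: str) -> dict:
        """Wrap the speech in an Alexa Skills Kit response envelope."""
        return {
            "version": "1.0",
            "response": {
                "outputSpeech": {"type": "PlainText", "text": speech},
                "shouldEndSession": True,
            },
        }

    def dialogflow_response(speech: str) -> dict:
        """Wrap the speech in a Dialogflow (Google Assistant) webhook response."""
        return {"fulfillmentText": speech}

    def handle_request(platform: str) -> dict:
        """Route one piece of logic to whichever platform made the request."""
        speech = handle_launch()
        if platform == "alexa":
            return alexa_response(speech)
        return dialogflow_response(speech)

Jovo takes care of that translation for you, along with things like session handling and platform-specific features, which is what makes the 'write once, run on both' promise practical.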


Our Guest

Jan König is one of the co-founders of Jovo and we're speaking to him today about all things cross-platform voice development. We'll hear from Jan about things like:

  • what 'multi-modal' actually means
  • features of the Jovo framework
  • the Jovo community and Jovo Studios
  • the differences between developing for Alexa and Google Assistant
  • the challenges of developing voice experiences
  • the skills needed for building Skills
  • designer and developer relationships in the voice world
  • testing voice apps
  • Jovo 1.0 and the future of Jovo


Links

All about conversational AI with Jeff Smith All about conversational AI with Jeff Smith Mon, 19 Feb 2018 05:17:44 GMT 1:06:55 5a8a5df81f6a2f4d6b91e83c no https://vux.world/conversational-ai-jeff-smith An introduction to one of the hottest topics in voice full 4 Conversational AI crops up constantly in conversations about voice, but what actually is it? How the heck does it work? And how can you use it? We speak to Jeff Smith to find out.


In this episode, we cover:


  • An overview of conversational AI - what it is and how it works
  • The role of voice in conversational AI
  • How and why brands should consider using it
  • How you can get started with machine learning and conversational AI (see the toy sketch after this list)
  • Challenges and opportunities such as the state of analytics and security
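
To make the getting-started point concrete, here's a toy Python sketch of intent classification with scikit-learn, one of the simplest ways in to machine learning for conversation. It's a generic illustration with made-up utterances and labels, not a technique Jeff describes and not anything used in Amelia.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny, made-up training set mapping utterances to intent labels
    utterances = [
        "what's the weather like today",
        "will it rain tomorrow",
        "schedule a meeting with Sam",
        "book a call for Friday at 3pm",
    ]
    intents = ["get_weather", "get_weather", "schedule_meeting", "schedule_meeting"]

    # Bag-of-words features feeding a simple classifier
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(utterances, intents)

    print(model.predict(["can you set up a meeting for Monday"]))  # expected: ['schedule_meeting']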


At the end of the show, I said that this was:


“One of the most interesting conversations I’ve ever had in my life.”


And I wasn’t lying.


Getting to grips with Conversational AI

If you’re not familiar with the concepts of conversational AI, this episode will give you a great introduction.

If you are familiar and work in the industry, Jeff drops some great nuggets and learnings from his extensive experience.


And if you’re interested in this from a branding perspective, by the end of this episode, you’ll have a full understanding of the contexts and environments where it’s useful.


Our Guest

Jeff Smith, author of Reactive Machine Learning Systems, has bags of experience in the area of machine learning and conversational AI. He’s built a series of AIs, including Amy and Andrew at X.ai (what a cool domain!), the AI personal assistants that help people schedule meetings.


Jeff now works with IPsoft and manages the conversational AI team who’re building Amelia. Amelia, as you’ll find out in the show, is an extremely sophisticated AI that can perform many human tasks, increasing productivity and business efficiencies.


Links


How to build an Alexa Skill in Wordpress with Tom Harrigan How to build an Alexa Skill in Wordpress with Tom Harrigan Mon, 12 Feb 2018 09:15:22 GMT 42:55 5a8158b623647cb01856ca0e no https://vux.world/build-an-alexa-skill-in-wordpress Tom Harrigan talks us through VoiceWP, his Wordpress plugin that lets you create Alexa Skills right from within Wordpress. full 3 In this episode, we’re going to show you how you can build an Alexa Skill from right within Wordpress.

Wordpress powers almost a third of the internet and now millions of websites running Wordpress can all have a presence on voice. It’s all thanks to VoiceWP, the Wordpress plugin that lets you build an Alexa Skill from within the most widely adopted CMS on the planet.


You can create Flash Briefings with ease and even have Alexa read the content of your website. We all know about audiobooks, but this could be the first opportunity to have your website content turned into audio form and read aloud as soon as it’s published, without you having to go through much effort at all. It’s super simple to set up.
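
For a sense of what's happening under the hood, here's a rough Python sketch of turning a WordPress site's latest posts into the kind of JSON feed Alexa expects for a Flash Briefing. VoiceWP itself is a PHP plugin that handles this for you; the sketch below is only an illustration, the site URL is hypothetical, and the exact feed requirements should be checked against Amazon's documentation.

    import json
    import re

    import requests

    WP_SITE = "https://example.com"  # hypothetical WordPress site

    def strip_tags(html: str) -> str:
        """Crude removal of HTML tags so Alexa reads plain text."""
        return re.sub(r"<[^>]+>", "", html).strip()

    def flash_briefing_feed(limit: int = 5) -> str:
        """Fetch recent posts via the WP REST API and emit a Flash Briefing style feed."""
        posts = requests.get(
            f"{WP_SITE}/wp-json/wp/v2/posts", params={"per_page": limit}, timeout=10
        ).json()

        items = [
            {
                "uid": f"urn:uuid:post-{post['id']}",
                "updateDate": post["date_gmt"] + "Z",  # e.g. 2018-02-12T09:15:22Z
                "titleText": strip_tags(post["title"]["rendered"]),
                "mainText": strip_tags(post["excerpt"]["rendered"]),
                "redirectionUrl": post["link"],
            }
            for post in posts
        ]
        return json.dumps(items, indent=2)

    if __name__ == "__main__":
        print(flash_briefing_feed())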




Our Guest


VoiceWP was built by our guest, Tom Harrigan, Partner and VP of Strategic Technology at Alley Interactive, a full service digital agency that specialises in helping publishers succeed online. We speak to Tom about VoiceWP, which is allowing brands such as People.com and Dow Jones’ Moneyish.com to build Alexa Skills and establish a presence on voice with ease.


And you can use it too, because it’s free and super-simple to set up.


So, if you use Wordpress as your CMS and you’re interested in testing the waters in voice, or if you’re looking for a starting point for Alexa Skill building, then this episode is for you.


We’re speaking to Tom about:


  • Where the idea for VoiceWP came from and how it was built
  • What the plugin is all about and what features it has
  • Who’s using it right now and who it’s targeted at
  • How you can get up and running with the plugin and try it out for yourself
  • What the future looks like and what’s coming up




Links

Voice-first user testing with Sam Howard Voice-first user testing with Sam Howard Mon, 12 Feb 2018 08:54:15 GMT 53:06 5a8154beac34577e1adf2626 no https://vux.world/voice-first-user-testing Sam Howard of Userfy takes us through the ins and outs of usability testing on voice first devices. full 2 In this episode, we're talking about voice first user testing, why it's so imperative and how you can get started doing your own voice user testing.


Why voice first user testing?


Although usability testing graphical user interfaces is as common as a trending tweet, it's a seed that’s yet to be greatly sown in the world of voice. There are many services that will provide technical testing, but those specifically offering voice first user testing in person with real users are few and far between. Enter Userfy.


Whether you create Alexa Skills, Google Actions or any other voice user experience, this episode will help you make sure that your voice user interface (VUI) works for the people who use it, by teaching you how to approach a voice-based user testing project.


We’ll cover things like:


  • The current state of user research in the voice industry
  • Why is usability testing important?
  • What kind of users should you test with?
  • User testing processes and planning
  • How to approach a voice-first testing project
  • Validating assumptions
  • The difference between graphical and voice user testing
  • What tools and equipment you need


Introducing Sam Howard


Our guest is Sam Howard, co-founder and Director of user research agency, Userfy, which specialises in user testing. Sam has a PhD in Human-Computer Interaction and a degree in Psychology. That, mixed with a love of technology and a passion for helping people, puts Sam at the forefront of the user research field.


Links:

Sam Howard on Twitter

Userfy website

Userfy on Twitter

Sam's 'Usability challenges facing voice-first devices' article

Welcome to VUX World with Kane Simms Welcome to VUX World with Kane Simms Mon, 12 Feb 2018 07:51:08 GMT 16:07 5a81476dac34577e1adf2625 no https://vux.world/welcome-to-vux-world All about what VUX World is all about trailer 1 Ladies and gentlemen, boys and girls, welcome to VUX World.


This introductory episode explains what VUX World is all about. Here, I'll take you through:


  • the aims of the show
  • how we intend to meet those aims
  • why it exists
  • who would find it useful
  • what's in store over the coming months


The aims of VUX World


This is an ambitious show that intends to cover three core aims for three primary groups of people:


  1. To help VUX pros create better voice experiences through bringing together people from throughout the industry to share insights, tools, tips and tricks
  2. To help brands create voice first strategies and implement voice first solutions through learning from companies and agencies who're doing it right now
  3. To help grow the VUX industry by introducing people such as creatives, scientists, technologists, strategists, linguists, developers and anyone else to the voice first world


How we'll meet our aims


We'll reach those aims through focusing on three core pillars of content.


  • Why? We'll cover the 'why' aspect of the argument for voice. Why should you take this area seriously? Why develop your skills here? Why voice?
  • How? We'll extensively cover the 'how' side of things, too. How can you get started? How does the voice industry work? How can you develop here? We'll cover things like tutorials, guides, tips, hints and tactics to help you learn, develop and grow to create epic voice experiences.
  • What's stopping you? Every industry has its challenges. We want to delve into those challenges and uncover opportunities to push past the barriers and find opportunities to move forward.


The host of VUX World


Your host for this journey is me, Kane Simms. I have a history in sound design and music production as well as extensive experience in marketing, UX and agile project management. My love for all things audio and passion for understanding user behaviour and technology culminate perfectly right here in the world of voice.

So, strap in, hold tight and brace yourself for the rapidly expanding world of voice. I'm glad to be your guide.

Now, without further ado, you should totally check out the first proper episode of the podcast: User testing on voice-first devices with Sam Howard.


Enjoy :)
