AiThority Interview with Patricia Nagle, President, Americas at TeamViewer

Hi Patricia, please tell us about your journey into the technology space. How did you get started at TeamViewer?

My career has spanned comprehensive sales, business development and go-to-market demand generation in the enterprise software, subscription and professional services sectors. I’ve always had a strong focus on building an accretive strategic partner ecosystem.

Prior to joining TeamViewer, I spent more than 13 years at Canadian publicly listed software giant OpenText, where I led corporate marketing and global business development. I also managed global strategic alliances with key partners that included SAP, Google, AWS, Microsoft and Salesforce that delivered substantial revenue contributions. Prior to OpenText, I worked in various sales, marketing and operations roles at leading global software and consulting companies. Today, I am President for the Americas at TeamViewer, where my responsibilities encompass sales, channel partnerships, the development of new routes to market and customer success.

If 2021 was about remote working and hybrid working, what’s in store for 2022? What are the new types of working culture that organizations are exploring?

Hybrid work is here to stay, and it will have dramatic effects on enterprise cultures and priorities. A major focus for 2022 will be finding the right balance between technology and people power to overcome challenges in a way that is cost-efficient and aligned with company goals, strategies and, most importantly, culture. Some questions that leaders will be asking are:

  • Is this technology mature enough to provide stable benefits to my company?
  • Is this problem better addressed using technology or people?
  • In what key areas can technology best augment current processes?
  • How will this technology impact the way our people interact with each other and clients?

Read More: AiThority Interview with Yashar Behzadi, CEO and Founder at Synthesis AI

How can companies smartly reduce the shadow IT effect?

The lack of strategic purchasing during the transition to hybrid work has left many enterprises’ networks in disarray, as individuals within companies regularly took it upon themselves to download the tools and software they needed for remote working during the height of the pandemic.

A priority for 2022 will be reining in these out-of-network services to ensure the security and stability of companies’ IT infrastructures. 2021 was a year of expansion and testing; 2022 will be a year of consolidation and forward planning. The first step in this process is assessing all assets and integrations on a given network to understand where redundancies and deadweight lie. From there, companies can begin to identify where to incorporate more efficient solutions. Employees at all levels should be a part of this discussion, as their tendency to download extraneous tools is where this all began. If their needs aren’t met, you can expect to be facing this same issue at the end of 2022.

What’s the future of enterprise collaboration tools and platforms? Why should organizations invest intelligently in these tools?

One can expect companies to capitalize on the ‘one-to-many multiplier’ of collaboration technologies. One example of this opportunity can be seen in employee onboarding and training. Companies have historically relied on senior personnel within a company to onboard new employees. The downsides of this approach become apparent when a leader is remote or leaves without passing on their institutional knowledge. Emerging AR and AI solutions will enable companies to archive and reformat critical information so that employees can benefit from accessible enterprise expertise during onboarding, training, and troubleshooting. Notably, the benefits of such technology use will only compound over time as more users contribute their own techniques and knowledge.

Collaboration technologies will also act as a force multiplier for front-line workers. These workers managed to carry the economy through the pandemic, yet modernization of many front-line industries was necessary long before the emergence of COVID-19. By pairing wearables like smart glasses with AI, IoT and 5G, companies can keep their workers connected, informed and safe. Leveraging technologies for these tasks will free employees to focus on other priorities and accomplish more in less time. This already drove significant productivity in 2021 and will continue to do so in 2022.

Read More: AiThority Interview with Will Keggin, Head of TV at LiveRamp

What’s AR/VR’s role in building advanced collaboration engagements with employees, customers and community partners?

Outside of the goods/services they sell, companies need two things to succeed: people and information. The pandemic has disrupted both of these in so many ways that the disruption of one now feeds the disruption of the other. In order to stop this vicious cycle, companies should leverage technologies such as AR and AI to bring the people and information together more efficiently. Doing so will ease the pressure on employees while making better use of the information available. Retail is a prime example of how this will play out, with retail workers leveraging smart glasses to help a customer know whether something is in stock, where it is within the store, and any other product information needed to lead the customer to a satisfying purchase experience. By streamlining information flows, companies can be more informed, agile and effective.

What are your thoughts on leveraging AI, analytics and automation for remote team management?

These technologies are essential for any modern enterprise. Companies juggling the countless tools and services developed for hybrid workers want to narrow in on the solutions that are resource-efficient and easy to use. We will continue to see new innovations reach the market, but the provision of these solutions will be a key differentiator in 2022. An example of where this is headed can be seen in remote agents that sit independently from the device. By leveraging cloud and 5G technologies, these tools can be deployed where needed without weighing on hardware and network resources. These tools also have a distinct security advantage, which is paramount as cyberattacks continue to pose a threat across industry sectors.

Read More: Predictions Series 2022: Women Leaders in AI Would Rise through the Ranks

Your predictions for the year 2022- what does your product development roadmap look like for the coming year?

In 2021, we began to see the true power of technology in augmenting employee efforts. The pace of this adoption will accelerate in 2022. Companies with a large human element will be the early beneficiaries, but no industry will be immune as key technologies mature and demonstrate their benefits on a larger scale.

The pandemic also continues to change how businesses deal with introducing new technology. There are still many opportunities for software to fill gaps in this hybrid world, and businesses are under pressure to move more quickly to deploy technology for a shorter time to impact. However, companies will likely be more reserved with hardware releases until supply chains calm down. This means we will see services broadening their capabilities across current infrastructures such as 5G and cloud networks.

As for our future products, we’re focused on the fourth major release of our enterprise Augmented Reality (AR) platform, TeamViewer Frontline. Building on the software from the acquisition of Ubimax in 2020 and our own AR development, the latest version now also integrates the technology of AR specialist Upskill and Mixed Reality (MR) pioneer Viscopic, which we acquired in 2021.

We are incredibly pleased to see our own product development and the integration of our AR acquisitions paying off. We are now able to offer organizations the most comprehensive AR solutions platform for a wide range of use cases, driving workplace digitalization even more effectively in all industries and across the entire value chain, while leveraging strong partnerships with SAP and Google Cloud.

Thank you, Patricia! That was fun and we hope to see you back on AiThority.com soon.

[To participate in our interview series, please write to us at sghosh@martechseries.com]

Patricia Nagle is President for the Americas at TeamViewer. She has a storied career that spans comprehensive sales, business development and go-to-market demand generation in the enterprise software, subscription and professional services sectors with a strong focus on building an accretive strategic partner ecosystem. She spent more than 13 years at Canadian publicly listed software giant OpenText, where she was responsible for corporate marketing and global business development that included channel sales, OEM and inside sales functions, supporting a community of over 28,000 partners. Moreover, Patricia managed global strategic alliances with key partners that included SAP, Google, AWS, Microsoft and Salesforce that delivered substantial revenue contribution. Prior to OpenText, Patricia worked in various sales, marketing and operations roles at leading global software and consulting companies.


TeamViewer is a leading global technology company that provides a connectivity platform to remotely access, control, manage, monitor, and repair devices of any kind – from laptops and mobile phones to industrial machines and robots. Although TeamViewer is free of charge for private use, it has more than 625,000 subscribers and enables companies of all sizes and from all industries to digitalize their business-critical processes through seamless connectivity. Against the backdrop of global megatrends like device proliferation, automation and new work, TeamViewer proactively shapes digital transformation and continuously innovates in the fields of Augmented Reality, Internet of Things and Artificial Intelligence. Since the company’s foundation in 2005, TeamViewer’s software has been installed on more than 2.5 billion devices around the world. The company is headquartered in Göppingen, Germany, and employs around 1,500 people globally.

AiThority Interview Series With Oliver Tavakoli, CTO at Vectra Networks

Oliver Tavakoli

Tell us about your role at Vectra and the team/technology you handle.

My role at Vectra is to guide strategy, come up with rough concepts based on that strategy and help turn rough concepts into actionable plans. That generally involves talking to security researchers (to form ideas), to customers (to pressure-test the ideas), to data scientists and developers (to judge the feasibility of building the tech), and to user experience designers (to ensure the idea can be easily understood by end users).

What is the current state of IDPS technology in 2018?

IDPS technology is at something of a crossroads as legacy/signature IDPS has reached a dead end.

The IPS (without a “D”) use case has been annexed into the Enterprise Network Firewall market as all these firewalls include an IPS engine and already sit inline.

There is nearly universal consensus that the IDS (without a “P”) use case is poorly served by signature technology and that the future is about broader IDS coverage through the use of behavioral models. These behavioral models can clearly benefit from the application of machine learning and AI techniques.

Tell us more about Cognito and the AI engine driving it.

Cognito has been constructed from the ground up with the single-minded goal of finding advanced cyber-attackers who have already established some foothold inside an organization’s network. To do this, Cognito uses both supervised and unsupervised machine learning approaches to detect cyber-attacker behavior rather than trying to recognize the exact tools that an attacker may employ at a point-in-time.

We collect a large set of metadata from organizations’ networks and augment it with key information from their logs to produce a unique dataset that gives insight into almost all attacker behaviors which utilize the network to accomplish a goal.
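
To make the behavior-first approach concrete, here is a minimal Python sketch – our own illustration with hypothetical features, not Vectra’s actual pipeline – of flagging hosts whose network metadata deviates from an organization’s learned baseline:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Hypothetical per-host features derived from network metadata:
    # [bytes_out_per_hour, distinct_internal_hosts_contacted, failed_auths_per_hour]
    baseline = rng.normal(loc=[50.0, 5.0, 0.2], scale=[10.0, 2.0, 0.1], size=(500, 3))

    # Learn what "normal" looks like for this organization's network.
    detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

    # A host quietly sweeping the internal network stands out behaviorally,
    # regardless of which tool the attacker used to do it.
    suspect = np.array([[55.0, 80.0, 4.0]])
    print(detector.decision_function(suspect))  # negative score = anomalous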

Where do you see the IDPS market moving between 2018-2020?

The IDPS market will continue along the trajectory of the past couple of years.

By 2020, we believe 70% of IPS use cases will be served by enterprise firewalls and the majority of the standalone IPS placements will be cloud-based (public or private). This will be the case even as the market for enterprise firewalls transforms based on micro-segmentation and becomes highly virtualized to meet cloud requirements.

The IDS use case will evolve to rely much more heavily on behavioral models – both ones written in code and ones trained using machine learning – and will utilize far fewer signatures.

Furthermore, the notion of a “network” IDS will blur as cloud and advanced attack use cases will force an IDS to inspect key cloud and authentication logs in addition to network traffic.

What are the major challenges to GDPR compliance? How do you prepare for it and offer technology for customers?

GDPR compliance requires companies to be acutely aware of whatever information they are gathering that is personally identifiable, to protect this data with diligence and to promptly report any leak of the information. There have been compliance mandates before – PCI is a global regulation, HIPAA is a US healthcare-related one – and these mandates give us a bit of a sense of how hard it will be to adopt new policies and procedures to come into compliance with GDPR. But unlike PCI and HIPAA, GDPR affects almost all companies and usually affects a much broader swath of their operations.

We try to help customers with their GDPR compliance by providing visibility into actions involving the assets that hold PII and alerting them of anything that looks like attacker behavior in the vicinity of these assets.

Cybersecurity is a field suffering from a staggering talent shortage. How can AI, and Vectra in particular, help solve the cyber skills gap?

The talent shortage is certainly real. Companies – particularly ones without deep pockets – are having trouble attracting and retaining cybersecurity talent. This often makes companies want to rely on managed-security-service-providers (MSSPs), but that just transfers the issue to the MSSPs, who have much the same problem hiring security architects and analysts.

Once we acknowledge the fact that, for the foreseeable future, this talent gap is the reality, AI can play a role in helping cover for some of the gaps. Taking Cognito as one example, we not only flag attacker behavior but also correlate the collection of behaviors we see over time, thereby removing time-consuming work and preparing as clear a storyline as possible for the security analyst. The analyst will still have to apply judgment, but the judgment can be applied to a well-crafted narrative rather than disjointed individual signals.
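
A toy version of that correlation step (our own sketch, not Cognito’s implementation) simply groups individual detections by host and orders them in time, so the analyst reads one storyline per host rather than scattered alerts:

    from collections import defaultdict

    detections = [
        {"host": "db-7", "t": 3, "behavior": "internal recon"},
        {"host": "db-7", "t": 9, "behavior": "data staging"},
        {"host": "ws-2", "t": 5, "behavior": "suspicious login"},
        {"host": "db-7", "t": 1, "behavior": "C2 beaconing"},
    ]

    # Build one time-ordered narrative per host from disjointed detections.
    storylines = defaultdict(list)
    for d in sorted(detections, key=lambda d: d["t"]):
        storylines[d["host"]].append(d["behavior"])

    for host, steps in storylines.items():
        print(host, "->", " -> ".join(steps))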

Would Chief Data Officers and Privacy Officers become ubiquitous positions for all companies to fulfill? What would be the role of CTOs in this disruptive ecosystem?

It’s hard to know precisely how companies will handle this new age of sophisticated cyber security attacks and stricter privacy protection mandates. We are certainly seeing a variety of job titles out there and also a variety of reporting relationships.

The title is not as important as the reporting relationship – when data/privacy officers start reporting to CEOs and spending time with boards-of-directors, we will know that the gravity of the situation has sunk in. I expect that CTOs will continue to provide deep technical expertise in service of many aspects of the business – including the cybersecurity and data privacy missions.

Anything else our readers should know about Vectra, Cognito or the future of AI in cybersecurity?

These are incredibly important times in the world of cybersecurity. While it may not be evident to outsiders, the technology stack that is being applied to solving cybersecurity problems is undergoing radical change. This represents an opportunity to solve problems that previously seemed intractable.

But, as is always the case, there are reactionary forces with an entrenched interest in maintaining the status quo who would like to quell the revolution.

The future is bright – now we just have to get there as quickly as we can.

Thank you, Oliver! That was fun and hope to see you back on AiThority soon.

Oliver Tavakoli is Chief Technology Officer at Vectra. Oliver is a technologist who has alternated between working for large and small companies throughout his 25-year career – he is clearly doing the latter right now. Prior to joining Vectra, Oliver spent more than seven years at Juniper as Chief Technical Officer for the security business. Oliver joined Juniper as a result of its acquisition of Funk Software, where he was CTO and better known as developer #1 for Steel-Belted Radius – you can ask him what product name came in second in the naming contest. Prior to joining Funk Software, Oliver co-founded Trilogy Inc. and prior to that, he did stints at Novell, Fluent Machines, and IBM. Oliver received an MS in mathematics and a BA in Mathematics and Computer Science from the University of Tennessee.

Vectra® is transforming cybersecurity with AI. Its Cognito™ platform automates cyberattack detection and empowers threat hunters, from data center and cloud workloads to user and IoT devices. Cognito correlates threats, prioritizes hosts based on risk and provides rich context to empower response with existing security systems, reducing security operations workload by 32X. The company has been issued five U.S. patents with 14 additional patents pending for cybersecurity applications of machine learning and artificial intelligence. Vectra is headquartered in San Jose, Calif. and has European regional headquarters in Zurich.

Interview with Jed Crosby, Head of Data Science at Clari
Jed Crosby

Jed has a passion for unlocking hidden value in data and for creating great data-driven software.  When he’s not at his computer, Jed enjoys camping and riding horses, and you might find him doing one of those things in Washington, Idaho, or Northwest Montana.

Holding degrees in applied math from Harvard and the University of Washington, as well as a Master’s in Scientific Computing from Stanford, Jed has worked as a Software Engineer at Microsoft and was the first person to hold the title of Data Scientist at Salesforce.com.


Selling is hard. Clari makes it easier. As a team, we’re obsessed with creating software that sales people love to use. We apply Artificial Intelligence (AI) to solve some of the biggest challenges they face in navigating the critical Opportunity-to-Close (OTC) process. We know sales and obsess over how to apply data science and prescriptive insights to make sales teams more productive and successful.

Tell us about your role at Clari and your journey into Artificial Intelligence.

I am head of Data Science at Clari, which means that I get to take a lead role in shaping how we use AI and predictive algorithms to drive better decision making in sales.

My journey into AI began in graduate school where I took my first classes on the subject. I became fascinated by the idea that machine learning could be used to extract insights from data that would otherwise be inaccessible. I sought out data-related assignments in my subsequent work at Microsoft and Salesforce.

There was a small, home-grown search engine embedded in the product I worked on at Microsoft, and I got to refactor this, which taught me a lot about information retrieval and about the abstractions we use to represent text in machine learning problems.

A friend then recruited me over to Salesforce to work on a new recommendation engine they were building, and this was my first taste of using Artificial Intelligence and machine learning to produce really great results in the context of a user-facing data product. I then became the first official Data Scientist at Salesforce and led projects on user profile clustering, customer attrition prediction, business social network analysis, feature adoption analysis, and other topics. AI is absolutely magical when you get it right, and I can’t imagine a better field to be in.

How should product teams and customers differentiate between Data Science and Artificial Intelligence technologies?

I like Foster Provost’s definition of Data Science as the practice of extracting knowledge and useful information from data. Artificial Intelligence, of course, has always been about teaching machines to think. The two fields overlap when machine learning is used to extract insights from data, and this is the zone we operate in at Clari.

Note that there are many areas of Data Science that do not involve Artificial Intelligence at all. This would include statistics and most traditional BI. It’s in applying machine prediction to new classes of business problems where the exciting new ground is being broken.

How deep is the B2B sales tech ecosystem into AI/machine learning?

We are in the early days of how AI and machine learning are impacting the way we work and how sales teams operate. This is a massive and highly transformative change in the way teams process information, collaborate and make decisions. We’re seeing a shift from historical analysis, reports, and queries to AI-based insights.

The shift is pushing practical information and predictions into the hands of reps, managers and sales execs at key moments across the sales process.

For instance, Clari can help identify risk and upside on opportunities and project how the quarter will end. This transforms not only the way sales teams work together to drive their strategies but also the ability of business leaders to drive the actions needed to hit their numbers. With AI, we’re also able to project the forecast into future quarters and suggest the required pipeline to achieve those projections.

The C-suite including COOs, CFOs, CMOs and CROs are now collaborating around the same set of data points and metrics in a far more integrated and productive way than ever before.

What are the core tenets of your AI and machine learning roadmap at Clari?

There are a few core elements to our product innovation and AI roadmap:

  • Data: We apply AI against a range of important signals from the buying process.  We’re collecting and analyzing not just CRM data but also rep and prospect activity data – all the emails, files and contracts that are flying back and forth, as well as the actual meetings that are taking place. We’re constantly adding additional data signals through a range of integrations with partners and application providers to give sales teams better insights and a better understanding of buyer behavior.
  • Practical use cases: AI insights are packaged into a set of practical applications for sales teams to use every day as they sell, close and forecast their business. For example, we’re analyzing opportunity win/loss data to come up with a CRM score that ranges from 1-99, representing the likelihood of an opportunity to close.  Sales reps and managers can use this score to identify risky deals and know where to focus. Because sales is so much an art, we’ve invested in making AI transparent, providing explanations to give sales teams confidence in the AI and drive adoption.
  • Custom models: We know every organization is different, so we’ve designed our system to automatically build independent models for every company, type of business, product line, and territory, thereby accommodating the unique ways our customers run their sales teams.  Part of this involves automatic segmentation via unsupervised machine learning models, followed by the application of independent models to the resulting segments.
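
As a rough sketch of the segment-then-model pattern described above – a toy example with made-up features, not Clari’s implementation – one could cluster opportunities with an unsupervised model and then fit an independent win/loss classifier per segment:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(600, 4))                          # hypothetical opportunity features
    y = (X[:, 0] + rng.normal(size=600) > 0).astype(int)   # toy won/lost labels

    # Unsupervised segmentation, then one supervised model per segment.
    segments = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X)
    per_segment = {
        seg: LogisticRegression().fit(X[segments.labels_ == seg], y[segments.labels_ == seg])
        for seg in range(3)
    }

    # A new opportunity is routed to its segment, then scored by that segment's model.
    x_new = rng.normal(size=(1, 4))
    seg = int(segments.predict(x_new)[0])
    print(per_segment[seg].predict_proba(x_new)[0, 1])     # probability of closing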

What are your predictions for AI for Sales and Marketing?

AI is here to stay and will continue to become a more powerful force in all areas of business.

AI is changing the course of work in the enterprise, putting a stop to systems that create more work and can’t answer critical questions or predict outcomes.

AI is about saving real people real time.  It’s about giving people answers they would need six months from now, liberating them from manual, mundane tasks and making their lives better.  Sales and marketing are just the beginning: AI will make its way into finance, customer success, and support, impacting all revenue-generating functions and adding a level of efficiency and predictability never seen in the enterprise before.

How can AI/machine learning help to build better CRM and Sales Automation platforms?

AI can help you anytime you have to make the same type of decision over and over again based on the same type of information. For instance, if you’re a sales professional, you’ve probably become really good at quickly deciding whether or not to pursue a new sales prospect by looking up information about them from sources you’ve decided are reliable. This is an important part of your job, and you’ve made this evaluation thousands of times.

But because it’s a repetitive decision that you always make from the same information sources, there’s a good chance that AI can be trained to make the decision for you efficiently enough that you should immediately accept an automatically generated prospects list and just get straight to selling.  CRM and sales are full of opportunities for AI to take over tedious classification tasks, freeing people up to be much more efficient and effective at their jobs.

CRM is going through a renaissance. The original premise of CRM was around a single unified view of the customer, and at that time it was reliant on manually inputting data. Now, data volumes are growing, and data is increasingly coming from multiple sources outside of CRM. This is fueling a wave of new innovation in the way data is collected, analyzed and presented. It’s driving the unbundling of core sales processes and the emergence of new sales intelligence tools that leverage CRM data in addition to email, calendar, and other data signals to drive better decision making in sales.

Who does it best when it comes to leveraging AI for marketing?

Without mentioning any specific companies, we’ve seen a few best practices that early adopters of AI in marketing are using to gain a competitive advantage. First, they’re using AI to intelligently adapt the content of their websites based on predictive analytics around who’s visiting and what their interests are.  It feels like we’re just in the very early days of what can be done with AI-driven personalization.

Second, they’re using AI to provide a more conversational experience for web visitors (via chatbots).  Finally, they are using AI to help prioritize account and lead outreach.  This is not your father’s lead scoring; it’s a whole new generation of AI-driven prioritization technology that involves identifying the attributes of your best customers and contacts based on historical buying patterns and leveraging that to identify new, high-potential targets.

How does Clari make selling better?

We’re solving key sales execution and forecasting problems around productivity, pipeline visibility and forecast accuracy that are common for any sales team. By providing clear visibility into pipeline risk and upside, sales reps know where to focus, managers can immediately spot risk in the pipeline, and execs can forecast with confidence.

How do you consume all the information on AI and other emerging technologies for advertising and branding?

One of the best things about my job is that I am learning all the time, and there are always fascinating new developments happening in AI and related areas.

At the same time, it’s important to recognize that the Artificial Intelligence revolution that is transforming so many industries is really driven by a core set of proven algorithms that happen to be effective in many different contexts.  I definitely keep my ear to the ground for the next new thing, but it’s a balance between keeping up with what’s new and making sure you’re taking advantage of proven techniques in as many places as you can.

Thank you, Jed! That was fun and hope to see you back on AiThority soon.

Interview with Anant Joshi, CRO at Factmata
Anant Joshi

Advertising technology is one of my main passions. I am the CRO of Factmata, where my current focus is business development.

Factmata

We are a London-based startup developing a cutting-edge, community-driven AI brand safety solution for advertisers. Our goal is to reduce misinformation and abusive content on the internet.

Tell us about your journey into Artificial Intelligence? How exciting is it for you to be a part of an AI-driven media platform?

I’ve worked in online advertising for 18 years and have seen many new technologies emerge that have changed the way we work for the better. Today, media buyers and sellers use technology platforms such as supply-side platforms (SSPs), demand-side platforms (DSPs), ad exchanges, data management platforms (DMPs) and customer data platforms (CDPs) to buy and sell media in the most efficient way possible. The smarter we get at buying and selling online advertising, the faster the industry will grow.

Artificial intelligence (AI) is the latest development that is revolutionizing the way the online advertising ecosystem works, so it’s really exciting for me to be at the forefront of this opportunity. One of the reasons AI is good for media buyers and sellers is that it increases the scale at which things like data analysis can be done. This is extremely important because it allows them to react to changes in real time and capture every opportunity to fine-tune their marketing efforts. The insights that come out of AI platforms and the time saved by automating back-end processes can then be used to do more strategic, creative things and improve the way brands and publishers engage with consumers.

What are the core tenets of your AI roadmap?

Our mission at Factmata is to protect people, advertisers, publishers and other businesses from deceptive or misleading content online. To do this we’re investing in people who are leading experts in natural language understanding (NLU) and AI and feeding as much information as possible into our algorithms so they evolve and scale with the pace of change in the online advertising ecosystem.

In parallel, we’re planning to launch our new platform, called Briefr, which will allow journalists and domain experts to annotate articles and post them on our platform. Briefr works by gathering the data from these annotations and scoring the credibility and trustworthiness of that content. This means that instead of news being shared based on likes, it will be shared based on the credibility and trustworthiness of the content. The data that Briefr captures will also be fed into our algorithms to help our AI learn faster and make better decisions based on human input.

It’s this blend of approaches that sets us apart from the rest of the market.

What are your predictions for AI in Programmatic? How do you intend to expand your horizon to meet media revenue objectives and automation standards?

There’s a lot of hype about AI in programmatic advertising and many companies claim their technology has AI capabilities when it doesn’t. Media buyers and sellers are becoming savvier about the smoke and mirrors presented by some companies and are demanding to know more about the AI capabilities of technology before investing. This will separate the wheat from the chaff and allow the AI industry to flourish based on trust and understanding.

I think we will see AI being applied throughout the supply chain to better understand the value of an impression for both a publisher and a brand, and to create a marketplace where the price paid for online ads is fair for both the publisher and the advertiser based on where the ad will be shown and to whom.

We’re not planning to expand into media buying or selling. We’re not in the business of telling people what they should do; we’re here to give people the information they need to make informed decisions. Armed with this information, brands will be able to run more effective, brand-safe campaigns based on the level of risk they are willing to take to reach their goals.

One of the problems we see in the market today is that existing brand safety technology works by blacklisting entire domains, which drastically reduces the reach of campaigns. If you reduce the reach of a campaign, you’re going to see a fall in performance metrics. We don’t do this; we identify individual pages that contain misinformation and warn advertisers that they might not want to advertise on those specific pages. This allows brands to run brand-safe campaigns without throttling reach. If we’re successful, we could help drive a boost in programmatic ad spending.

When it comes to automation standards, we want to lead the way and establish best practices and educate the market. Our technology is unique so we’re in a good position to do this.

How can AI/machine learning help to build a brand-safe media ecosystem?

Traditional brand safety technology works on rules-based systems whereby the technology reads the content on a webpage looking at each keyword and identifies the page as safe or not safe based on the individual words. Factmata’s AI is able to not only identify the keywords on a page but to read whole sentences and understand the context in which those words are being used.

For example, a home furnishings brand may not want their ads to appear on religious websites, so they might use a brand safety technology to screen out any web pages that contain the word “Buddha”. This would rule out any homestyle content that references a Buddha statue in the context of home decoration. Our AI would read the whole sentence on the homestyle website and flag it as potentially unsafe, but with a score that indicates the level of risk of exposing a brand to an unsafe environment. In this case, it might be 1 out of 100.
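
A simplified contrast makes the difference tangible. The scoring function below is our own toy stand-in for Factmata’s NLU models, hard-coding the home-decor case purely for illustration:

    BLOCKLIST = {"buddha"}

    def keyword_rule(text: str) -> bool:
        """Legacy approach: block the page if any blocklisted keyword appears."""
        return any(word in text.lower() for word in BLOCKLIST)

    def risk_score(text: str) -> int:
        """Toy stand-in for a contextual model: returns 1-100, higher = riskier.
        A real system would score whole sentences with an NLU model."""
        text = text.lower()
        if "buddha statue" in text and "decor" in text:
            return 1    # religious keyword, but clearly a home-decoration context
        return 85 if "buddha" in text else 5

    page = "This Buddha statue anchors the minimalist decor of the living room."
    print(keyword_rule(page))  # True -> the legacy rule blocks the page outright
    print(risk_score(page))    # 1    -> low risk; the brand sets its own tolerance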

It’s these quality scores that enable brands to set their own tolerances in different areas, e.g. fake news, politically extreme content, etc. By allowing brands to take control of limiting where their ads are shown, we are cutting off revenue to those who create misinformation online, since for most of them online advertising is their only source of revenue. This is how we’re cleaning up the ecosystem: we’re not censoring content, we’re putting brands in control.

How do you consume all the information on AI and other emerging technologies for advertising and branding?

I like to stay informed by reading case studies, going to events about AI and/or advertising and listening to podcasts which are a great way to discover new ideas.

In 2018-2020, what are the biggest challenges in the adoption of AI/ML? How do you see the media intelligence market evolving in its fight against Fake News and extremist content?

There’s still a lot of education to be done around the capabilities of brand safety AI. One of the biggest challenges is that moving away from existing technologies in a stack is hard to do and poses a risk to performance if it goes wrong. That’s why it’s hard for emerging companies to enter the market because you need a really strong product to be able to disrupt something which on the surface appears to be working well.

When it comes to fake news and extremist content, AI is the only option because it’s not possible for any rules-based or human solution to cope with the sheer volume of content online and the pace at which new content is created.

How do you make AI deliver economic benefits as well as social goodwill?

AI allows people to do things better, faster and at lower costs. Contrary to popular opinion, AI doesn’t replace humans – rather, it allows them to do more.

A study by PwC predicts that the application of AI has the potential to double economic growth rates in some economies over the next couple of decades, and a key reason for that is improving human productivity by up to 40 percent in the way people do their work.

Here’s a simple example of how AI can be used to increase productivity and profitability for a retailer. By applying AI to automate back-end processes and data analysis, retailers can optimize stock levels based on the amount of warehouse space available, production time, and sales trends. Having optimal stock levels means more sales and less wastage.

Tell us about your research programs and the most outstanding digital campaign at Factmata.

A number of pilot tests are underway right now, and we’re able to demonstrate the kinds of sites and content that are going unnoticed. We’ve proved that our technology works and that it helps brands avoid having their ads displayed in the wrong places.

What is your vision in making Programmatic technologies and AI readily available to local marketing communities?

We need to make sure that any technology is cost-effective and drives business goals. This means really understanding people’s business goals and building a technology that is able to meet short term and long term objectives and adapt to changes in the marketplace.

At the moment, I’m talking to as many people as I can about the challenges they face and the issues they are experiencing in trying to solve those challenges. I’m passionate about making technology accessible, so I’m also working to educate people about what AI is (and is not), how it can be applied to marketing, and the benefits it brings.

What AI start-ups and labs are you keenly following?

This year, I decided to embrace a vegan diet and started to look into innovative plant-based food production. I discovered a company called ‘The Not Company’, shortened to ‘NotCo’, that combines AI with food science to craft cutting-edge plant-based foods. So any companies that focus on climate change and agriculture interest me. Bowery Farming seems to be tackling agriculture in a completely new way, which will have huge benefits for humans and the environment. They have a small number of crops at the moment, but it will be interesting to watch the company grow and diversify.

What technologies within AI and computing are you interested in?

I’m interested in AI for good where there are clear benefits for society, and where human input is seen as essential in monitoring the technology. There are innovative AI startups such as Arterys, aiming to reduce the time needed to provide accurate medical diagnoses.

As an AI leader, what industries you think would be fastest to adopting AI/ML with smooth efficiency? What are the new emerging markets for AI technology markets?

The greatest economic gains from AI will be in China (26% boost to GDP in 2030) and North America (14.5% boost), equivalent to a total of $10.7 trillion and accounting for almost 70% of the global economic impact, according to research by PwC.

Other than advertising, the legal, finance and healthcare industries have a lot to gain from AI. When looking at indicators that an industry or company will benefit from AI you need to look at several things:

  1. Ratio of junior staff to senior staff – the more junior staff there are, the greater the potential for AI
  2. Data – the greater a business’s data pool is the more likely it is that AI can drive value because of its ability to process huge amounts of data in a fraction of the time it would take a human
  3. The number of variables that impact business performance – the more influences there are on a business the harder it is to predict the impact of change. AI can model different scenarios and change the models in real time as markets shift

What’s your smartest work-related shortcut or productivity hack?

Slack is brilliant for cutting down on emails and communicating with everyone in a team quickly. The biggest time saving for me comes through being able to share documents and thoughts without getting lost in long email chains. We also use Streak CRM and Airtable for keeping track of our projects.

Tag the one person in the industry whose answers to these questions you would love to read:

Matias Muchnick – CEO & Founder at NotCo

Thank you, Anant! That was fun and hope to see you back on AiThority soon.

Interview with Robert Schwarz, Managing Director ANZ, Nuance Communications
Robert Schwarz

Robert Schwarz is responsible for driving Nuance’s market share and operations in Australia and New Zealand, with a particular focus on intelligent speech language technologies. Prior to Nuance, Schwarz was director of OEM and Managed Cloud-as-a-Service businesses for SAP Australia and New Zealand, where he was responsible for developing new routes to market for SAP software solutions. He has also held senior strategic and management roles at IBM, Business Objects and Oracle.

Nuance Communications is the pioneer and leader in conversational and cognitive AI innovations that bring intelligence to everyday work and life. We deliver solutions that can understand, analyze and respond to human language to increase productivity and amplify human intelligence.  With decades of domain and artificial intelligence expertise, we work with thousands of organizations – in global industries that include healthcare, telecommunications, automotive, financial services, and retail – to create stronger relationships and better experiences for their customers and workforce.

How does Nuance Enterprise leverage AI to facilitate autonomous learning and improvement?

AI is fundamental to enabling true conversational experiences between humans and machines and at the core of all Nuance Enterprise solutions.

When deploying Nuance Nina – the enterprise virtual assistant (VA) – the AI engine within the solution allows an organization to quickly get a VA up to speed on the answers to the most frequently asked questions. Nina does this by utilizing machine learning algorithms to automatically sift through data (like chat logs) to bootstrap the initial Natural Language Understanding (NLU) model, while also identifying the key intents to create automated responses with a resolution or next best step.

Once Nina is deployed, the AI engine monitors conversations and is able to see when a human’s request is answered correctly, noting that the intent processed was accurate. If Nina does not understand a request or isn’t sure what the response should be, a live human agent is looped in to engage the customer. This almost always provides a positive experience and ensures a seamless customer journey – customers stay in the channel they started in, with no escalation and no waiting, and they become used to the VA working for them. Nina also logs the answer for the intent via supervised, highly automated NLU learning to continually train the VA.
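
A minimal sketch of that loop – built on our own assumptions rather than Nuance’s engine – bootstraps an intent classifier from a few hypothetical chat-log pairs, auto-answers only above a confidence threshold, and logs everything else for a live agent and later retraining:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Bootstrapped from hypothetical chat logs: utterance -> intent.
    utterances = ["reset my password", "forgot password", "what is my balance",
                  "check account balance", "open a new account", "create account"]
    intents = ["password", "password", "balance", "balance", "signup", "signup"]

    vec = TfidfVectorizer().fit(utterances)
    clf = LogisticRegression().fit(vec.transform(utterances), intents)

    retraining_log = []

    def answer(utterance: str, threshold: float = 0.5) -> str:
        probs = clf.predict_proba(vec.transform([utterance]))[0]
        best = probs.argmax()
        if probs[best] >= threshold:
            return f"auto-reply for intent '{clf.classes_[best]}'"
        retraining_log.append(utterance)  # feeds the supervised learning loop
        return "escalating to a live agent"

    print(answer("how do I reset my password"))
    print(answer("tell me about mortgage refinancing rates"))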

Nuance’s voice biometrics technology also has AI at its core. With each authentication of a consumer’s voice, the solution processes hundreds of data sets, analyzing everything from that individual’s personal traits (their voice, the shape of their vocal cords) to their behaviors (how they speak, their accent) to validate that they are who they claim to be. This happens in seconds and is only possible through the power of AI.
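
As a toy illustration of the matching step – ours alone; the real system analyzes far richer trait and behavior data – one can compare a stored enrollment “voiceprint” embedding against one computed from the current call:

    import numpy as np

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    rng = np.random.default_rng(42)
    enrolled = rng.normal(size=128)                            # stored at enrollment
    same_caller = enrolled + rng.normal(scale=0.1, size=128)   # small within-speaker drift
    impostor = rng.normal(size=128)                            # unrelated voice

    THRESHOLD = 0.8  # hypothetical operating point
    print(cosine(enrolled, same_caller) >= THRESHOLD)  # expected: True
    print(cosine(enrolled, impostor) >= THRESHOLD)     # expected: False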

Which are the industries that you have noted early adoption of the technology, which ones are emerging?

Organizations across Australia are increasingly investing in VA technologies for their ability to provide a genuine, tailored service that delivers customers human-like conversational experiences. These are personalized in a way that can scale to the needs of today’s always-on consumer. With ever-evolving customer expectations and the proliferation of consumer touch-points, businesses must evolve their customer service approach to easily and cost-effectively extend their investment to both new and emerging channels.

Conversational AI is being used across a variety of industries including travel, FMCG, government, insurance and banking. For example, Nuance partnered with ANZ Bank, the first Australian bank to roll out voice biometrics to mobile banking for step-up authentication on transactions over $1,000. Additionally, Alex – the virtual assistant of IP Australia and the Australian Taxation Office, powered by Nuance’s ‘Nina’ Conversational AI customer service solution – was the first of its kind to be implemented within the Australian Federal Government.

What are the challenges that Nuance’s voice recognition can address for the IoT market?

Connected devices are becoming increasingly sophisticated, and as a result, their interfaces are becoming equally complex. With each IoT device having its own unique facets, the onus is placed on the users to learn how to use numerous buttons, menus and screens. Voice recognition overcomes this challenge by simplifying the customer experience. Voice acts as a universal interface and makes accessing these devices easy and seamless, in a way that is natural to the user.

What are the challenges in text-to-speech? How does Nuance tackle them to be one of the market leaders in the space?

One of the challenges facing text-to-speech (TTS) technology is the inability to provide a high-quality, natural service with minimal errors. TTS technology should enable more natural interactions between human and machine, not the opposite.

Nuance TTS expertise has been perfected over 20 years. In fact, earlier this year, Nuance announced that it has advanced TTS technology with deep neural networks (DNN) to deliver a new benchmark standard. This development reduced errors by 40 per cent, in comparison to previous speech synthesis techniques. By pushing ourselves to develop more natural and expressive speech synthesis, we have created technology that can pronounce challenging words better than most humans. Have a listen here.

Can you tell us about the competitive market of virtual assistants and how Nina is able to differentiate its technology from the others?

Addressing rising consumer expectations for seamless services amidst an evolving landscape of digital and mobile channels is challenging.

There are many chatbots available in the market, many of which are simple question/answer bots not powered by Conversational AI. As Nina is a virtual assistant powered by Conversational AI, customers can be engaged via a real-time, intelligent conversation.

As part of the Nuance Digital Engagement Platform, Nina consists of the following differentiating capabilities:

  • Targeting engine chooses between live agent, guides or VAs, based on user behavior, profile and page, on a per-conversation basis to serve the right digital interaction to the right visitor at the right time.
  • Omni-channel conversational design: VA and live chat integrate into the same elegant, floating, engagement window, including elegant multi-device support.
  • Effortless conversation between a brand and the consumer with both a VA and live chat.
  • Seamless transfer from any offsite or onsite channel to live interactions including contextual information for a seamless consumer interaction. For example, from TV ad to SMS chat, from IVR to digital, from social media to the brand’s mobile website, from VA to live agent.
  • Integrated reporting dashboard: Analytics from all engagements (guides, VA, live chat) can be used to measure and optimize KPIs along the funnel – from the business rules to the automated conversation to the live chat engagement – to provide interactions that optimize revenues, costs and user satisfaction.
  • Continuous learning loop: Live agent scripts are influenced by, and VA is trained based on, visitor behavior, past interactions and existing live chat transcripts. Reporting and insights are then used to improve the targeting engine, routing, VA and live chat behavior.

Further, in 2017, Nuance was ranked the number one chatbot/virtual assistant vendor for enterprise customer service by leading research and advisory firm Forrester. Today, there are over 6,500 enterprises using Nuance’s self-service technologies, processing an estimated 16 billion transactions each year. Nuance’s AI innovations power a new generation of customer engagement apps that enable enterprises to communicate with consumers anytime, anywhere, and through virtually any channel. Nuance is the only vendor to combine the tooling, intelligence and analytics of natural language processing (NLP) and cognitive technologies, as well as integrated security, to deliver automated and assisted solutions targeted to Enterprise needs.

Help us understand how Nuance is able to offer customized services across the industry; how would Finance find personalized use?

Nuance Enterprise specializes in conversational AI and secure, convenient and unified customer engagement across channels. A lot of time is spent assessing a business to determine which solution is best suited.

Our omni-channel platform and unified tooling enable a design-once, deploy-many approach. This provides a true benefit to organizations by offering a consistent consumer experience without the need to implement and maintain a number of one-off integrations.

Nuance is known for its deep domain expertise in vertical industries, and a team of experts that help large, complex organizations implement the most forward-thinking conversational technologies. For industries like finance, where regulations are high and the needs around consumer data are paramount, Nuance works closely with customers to develop solutions that meet all their needs – ensuring unique business challenges and situations are addressed in a customized way.

What does the future of voice recognition hold in store in your opinion?

As the world becomes more connected, and conversational AI advances, Nuance believes voice will become the most common user interface. Businesses need to compete for customer loyalty, heightening the importance to meet consumers on the channels they most engage. As voice is the most natural, convenient way for humans to interact with machines – amplifying their intelligence and making everyday tasks simple – we anticipate voice recognition will be the preferred customer user interface providing a cohesive experience that is cost-effective and supportive of the modern environment.

What’s next for Nuance; any big news you’d like to share?

Nuance has pioneered and advanced the conversational AI marketplace for decades and will continue to bring new innovative solutions to market that meet the needs of today’s largest global organizations. The growing list of global enterprises successfully implementing Nuance technology comes from a diverse set of industries and are seeing rapid and widespread adoption of the technology. This year, you can expect we will have some significant announcements around the robust AI engine that is powering our solutions and an increased focus on delivering a secure and enhanced customer experience for enterprises using our virtual assistant and speech services.

Thank you, Robert! That was fun and hope to see you back on AiThority soon.

Interview with Seamus Abshere Co-founder and CTO at Faraday
Seamus Abshere

Seamus oversees infrastructure, software, and data security policies and practices. Seamus has an extensive background in IT security, cloud computing, database technology, and machine learning. He graduated from Princeton University. Over the last 13 years, he’s developed and implemented IT security policies and strategies for platforms handling some of the world’s most sensitive consumer data.

 

Faraday optimizes every stage of your B2C revenue journey, from acquisition to retention. Finally — machine learning and big data that actually work for your business. We believe data is capital. It’s the purest, most powerful fuel your business has ever had. If you’re not putting it in a rocket ship, you’re burning it in a bonfire, every day.

Tell us about yourself and the journey to co-founding Faraday.

Faraday grew out of a desire to apply Machine Learning to data that was otherwise just lying around. We had some early clients with customer lists and mailing lists, and they basically didn't know what to do with them – they played the traditional acquisition / upsell / retention game. That involves a lot of guesswork – lucky guesses make you look great, but in the end it was just luck, not strategy.

We started to formalize how we looked at the data, and put it in terms that advanced algorithms could process. I will never forget the day a client ran a secret test against us and then showed us the internal report on it: we had beaten their models in the real world by a significant amount.

In your opinion, how have AI technologies been a game-changer for marketers and how they are able to do their job?

AI is helping marketers do their jobs more efficiently by automating and optimizing processes like ad bidding, web design, copywriting, and campaign targeting.

The most important aspect of marketing is truly understanding your ideal customers: who they are, their interests and hobbies, and ultimately, why they choose to engage with your brand. The answers to these questions guide marketing strategies. So, while optimizing ad bidding is undoubtedly important, ensuring your ads are targeted at the right audience with the right message is critical. That’s essentially how we help marketers leverage AI.

Read More: Avionos Releases New Data Revealing How Consumer Expectations Are Driving Retail Strategies

Tell us about the FIG database and how businesses can leverage this knowledge resource.

The Faraday Identity Graph (FIG) is Faraday's nationwide consumer database, containing more than 400 demographic, psychographic, and property attributes on approximately 235 million US consumers. This data is used to enrich our clients' existing customer and prospect data with hundreds of additional attributes. That rich, cross-referenced data is then used to train our machine learning algorithms, which build models capable of predicting – with a high degree of accuracy – our clients' desired outcomes. For example, a client may want to predict whether a customer is likely to churn. We'll use the client's enriched customer data to train the machine learning engine to build a model that can identify churn-prone customers, enabling our client to take preventative action with those customers.
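
To make the enrich-then-train loop concrete, here is a minimal Python sketch. The file names, the person_id join key, the churned label, and the choice of a random forest are all assumptions for illustration – this is not Faraday's actual pipeline.

```python
# Hypothetical sketch: enrich client records with FIG-style attributes,
# then train a churn model on the enriched table.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

customers = pd.read_csv("client_customers.csv")   # client's own records (hypothetical file)
attributes = pd.read_csv("fig_attributes.csv")    # demographic/psychographic/property columns
enriched = customers.merge(attributes, on="person_id", how="left")

# Assumes the attribute columns are numeric; a real pipeline would also
# encode categoricals and handle missing data more carefully.
X = enriched.drop(columns=["person_id", "churned"]).fillna(0)
y = enriched["churned"]                           # 1 if the customer churned, else 0

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Score everyone and flag churn-prone customers for preventative outreach.
enriched["churn_risk"] = model.predict_proba(X)[:, 1]
at_risk = enriched[enriched["churn_risk"] > 0.7]
print(len(at_risk), "customers flagged")
```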

Can you take us through how Faraday’s technology is used to calculate lead scores?

We build a model that differentiates between positive outcomes and everything else. Sometimes we have data about really negative outcomes; sometimes we don’t (this is a classic problem in machine learning). To get a lead score, we give this model new data that it has never seen before and it comes back with a prediction and a confidence. We combine that confidence with the expected rate of positive outcomes and arrive at a score between 0 and 1.
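
That combination step could plausibly look like the Bayes-style blend below, which treats the model's confidence and the base rate as independent odds. The formula is an assumption for illustration, not Faraday's published math.

```python
def lead_score(confidence: float, base_rate: float) -> float:
    """Blend model confidence with the expected rate of positive outcomes.

    Multiplies the two as independent odds -- one plausible reading of
    "combine", not Faraday's actual formula. Assumes 0 < inputs < 1.
    """
    odds = (confidence / (1 - confidence)) * (base_rate / (1 - base_rate))
    return odds / (1 + odds)

# A confident prediction against a rare outcome still yields a modest score.
print(round(lead_score(confidence=0.8, base_rate=0.05), 2))  # 0.17
```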

What kinds of insights are drawn to facilitate predictive targeting for clients? How customizable is this process?

Data-driven insights are especially helpful in optimizing creative processes like content creation and ad creative. Because predictive models can identify individuals likely (or unlikely) to convert on the desired outcome, they’re great for building propensity-based audiences used for targeting purposes. When you combine the two, you can personalize your campaigns for audiences that are likely to purchase your products. We use our clients’ enriched data as the basis for analysis and modeling, and every insight and prediction is innately customized for each client.

AI is often seen as an expensive offering. How does Faraday cater to SMEs?

Traditionally, operationalizing AI requires large datasets, a machine learning engine, data scientists to build and validate predictive models, and software engineers to develop systems that feed predictions to the necessary destinations. Faraday includes everything needed to operationalize AI and is streamlined for consumer-facing organizations. Thanks to our pure focus on the consumer journey, we can offer actionable AI faster and more cost-efficiently than in-house or blank-slate AI solutions.

What role does an outreach partner play in the Faraday ecosystem?

Faraday offers an app store-like catalog of outreach partners: once you have made deliverables and predictions in the Faraday app, you can push them out to any of the supported partners. Our clients require this – they don't just want Facebook ads, for example – but we also customize each output so that it maximizes effectiveness on the particular platform. For example, if you are marketing to families, we may generate multiple names for every address you want to reach, to ensure that the ad network can find somebody.

How does Faraday integrate with other enterprise software that a company may already be using?

As with our outreach partners, we have something similar for existing enterprise systems (business intelligence, data warehousing, etc.). Chances are a system is supported out of the box; it then goes through an authorization process so Faraday can make predictions using the data stored on that system. Clients keep their existing systems most of the time, and Faraday becomes the AI pipeline hooked up to them.

How does Faraday help businesses prevent churn?

It’s all about the machine learning process. When a client tells us they want to prevent churn, we’ll instruct our machine learning engine to build a model that can identify churn-prone customers. Once the model is deployed, our clients can build audiences of churn-prone customers, push those audiences directly to key outreach channels, and intervene with a special offer or promotion before it’s too late.

Alternatively, you can proactively prevent churn by using the model to identify churn-prone individuals before they actually become customers. This is especially useful when you want to avoid using your marketing budget to target individuals who are likely to churn regardless.

Congratulations on the recent funding! What’s next for Faraday?

We are growing our FIG dataset by an order of magnitude. We are adding bleeding-edge modeling capabilities such as deepnets and hybrid models, where multiple machine learning techniques (logistic regression, random forest, deepnet) vote on a single prediction. We are also expanding the "menu" of AI options that every Faraday customer gets out of the box, and figuring out how to automatically prune it down to what best serves each customer.
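
The voting idea maps naturally onto a soft-voting ensemble, as in the scikit-learn sketch below, where each technique contributes class probabilities and the average wins. Faraday's actual hybrid stack is not public, so treat this as an illustrative analogue.

```python
# Illustrative only: three techniques "vote" by averaging probabilities.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

hybrid = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("deepnet", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)),
    ],
    voting="soft",  # average predicted probabilities rather than hard labels
).fit(X, y)

print(hybrid.predict_proba(X[:3]))  # each row: [P(class 0), P(class 1)]
```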

Thank you, Seamus! That was fun and hope to see you back on AiThority soon.

AiThority Interview Series With Kalin Stoyanchev, Head of Blockchain + RNDR Project Lead

Kalin Stoyanchev

Kalin is an entrepreneur and blockchain specialist with a passion for bringing the technology to the forefront of widespread use. Starting as an early enthusiast and trader of Bitcoin in early 2011, he has since shifted focus to the future of blockchain technology through disruption and new business integration. He is currently the VP of Blockchain and Distributed Systems at OTOY and serves as the Project Lead at RNDR.

RNDR

The Render Network, through the use of RNDR tokens, will be the first network to transform the power of GPU compute into a decentralized economy of connected 3D assets. OTOY's vision is to distribute the framework of the existing rendering service in OctaneRender using RNDR, a digital token built on the Ethereum blockchain.

Tell us about yourself and how you came about creating RNDR.
I’ve always been interested in finance and technology; studying both over the years. Cryptocurrency caught my attention since it has pieces of both tech and finance. Taking that further, I’ve been interested in blockchain ever since I was first introduced to Bitcoin in 2011. As for RNDR, I was familiar with the great advances in AR/VR technology over the past 4-5 years – OTOY has always been a pioneer in this space. When the team at OTOY told me that they were looking to create a distributed network of GPUs that would help expand the world’s access to this new 3D content, I was hooked. Fast forward to a year later, and I’m so glad that I’m able to be a part of this great team and project.

Help us understand the concept of “connected GPU assets”. Tell us about the patent.
To create 3D assets, you need processing power; unfortunately, a lot of budding content creators do not have enough GPU power to bring their creations to life in a way that showcases the true quality of their vision. You could always go to an external render farm, but that process is insecure. Purchasing GPUs can be a costly endeavor, so that is also out of the question for most.

Now, imagine the ability to create and design your 3D assets, then just click a few buttons and send the render off to be processed by idle GPUs all across the network. That is what we are trying to build – easy access to distributed GPU computing as a foundation for all of the additional features we are looking to port onto the network.

Jules (the co-founder and CEO of OTOY) has had an idea of doing something like this since back in 2009, hence the patent.

What is the role of Ethereum and how does RNDR leverage the Blockchain?
Building on Ethereum allows us to quickly scale and create a platform that uses the underlying blockchain to act as a reconciliation and processing agent for render jobs between renderers and content creators. This removes the need for traditional centralized server infrastructure and creates a platform for service exchange. Eventually, we will build our own blockchain to tackle the problem of inefficient mining. Our own blockchain platform will also give us tremendous flexibility in additional features.
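
A toy ledger makes that reconciliation role concrete: posting a job escrows a creator's tokens, and settlement releases them to the renderer. This is an in-memory Python stand-in under assumed names, not RNDR's actual Ethereum contracts.

```python
# Toy, in-memory stand-in for the on-chain escrow/reconciliation pattern.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RenderJob:
    creator: str
    escrowed_rndr: float
    renderer: Optional[str] = None
    completed: bool = False

@dataclass
class Ledger:
    balances: dict = field(default_factory=dict)
    jobs: list = field(default_factory=list)

    def post_job(self, creator: str, amount: float) -> RenderJob:
        assert self.balances.get(creator, 0.0) >= amount, "insufficient RNDR"
        self.balances[creator] -= amount              # lock tokens in escrow
        job = RenderJob(creator=creator, escrowed_rndr=amount)
        self.jobs.append(job)
        return job

    def settle(self, job: RenderJob, renderer: str) -> None:
        # On the real network, settlement follows verification of the frames.
        job.renderer, job.completed = renderer, True
        self.balances[renderer] = self.balances.get(renderer, 0.0) + job.escrowed_rndr

ledger = Ledger(balances={"creator": 100.0})
job = ledger.post_job("creator", 25.0)
ledger.settle(job, "gpu_node_7")
print(ledger.balances)  # {'creator': 75.0, 'gpu_node_7': 25.0}
```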

Who can be a part of the RNDR network? What are the opportunities that you foresee?
The beauty of the RNDR Network is that anybody with a GPU can participate. We really want to open the world of AR & VR up to new users.

Because users are going to dedicate their computing resources to generating assets and getting tokens in exchange, it will open up opportunities for users to learn about the future of GPU compute on a node-based system on the blockchain.

They can also explore how to acquire tokens through our system and how RNDR can be used to help users be more efficient with their hardware/GPUs.

Can you help us understand the impact that RNDR can have in healthcare, VR, and gaming for developers and artists?
Having a distributed and easily accessible network of GPUs will be valuable to all industries, including healthcare. Not to get too futuristic, but imagine a doctor being able to render prosthetics for patients based on their needs, instantly being able to show the patient what they are working on. Additionally, this network is a no-brainer for VR and gaming developers, as it provides the GPU power needed to visualize the assets that they are designing and creating almost instantly.

How does the token work, what was the response? What’s the next phase for RNDR?
Currently, we are in Phase II of development, with the token already functional in Phase I through our initial feature launch. We are growing our users and the community surrounding the project, while simultaneously working on our partnerships with Decentraland and Sia. We are excited to show you the next phase of our network as we continue to add new features and capabilities for the RNDR Network.

Thank you, Kalin! That was fun and hope to see you back on AiThority soon.

AiThority Interview Series With Jeff Gallino, Founder and CTO, CallMiner

Jeff Gallino

Jeff oversees research, language development, and future product direction. Gallino was President and CEO during CallMiner’s first five years. Jeff has more than 25 years of experience delivering complex software and hardware solutions to enterprise and government customers. Prior to founding CallMiner, Jeff worked at ThinkEngine Networks, Grant Thornton Consulting, and served for 11 years in the US Air Force.


CallMiner's market-leading, cloud-based speech analytics solution automatically analyzes contacts across all communication channels: calls, chat, email, and social. The company offers real-time monitoring and post-call analytics, delivering actionable insights to contact center staff, business analysts, and executives.

Tell us about yourself and your journey to start CallMiner.
I founded CallMiner in 2002 after realizing no one was leveraging speech recognition and analytics together to analyze large call sets. I had experience with speech recognition from my job at the time and put the analytics piece together after attending a financial analyst conference and listening to analysts bemoan the time it took to listen and classify all the quarterly earnings calls. There were no vendor solutions for both transcribing and automating the review process, and I realized then what a competitive advantage the two together could provide for companies that had large call volumes to review.

What were the traditional methods for call centers to get feedback about their customer interactions? How does it compare against CallMiner technology?
Traditional methods have involved a random selection of a portion of calls from each agent for the month, typically only 1-2% of their total call volume.

Aside from being a labor-intensive, manual process, this traditional approach to measuring an agent's performance produces a completely inaccurate reflection of that performance because the sample set is so small. Inaccurate feedback leads to a lack of improvement in performance and customer experience. This also drives agent attrition, one of the largest challenges contact centers face.

CallMiner Eureka automates the process and reviews 100% of calls to provide an unbiased and comprehensive view of agent activity, which means turnaround time to performance improvements is substantially faster. In addition, there is a treasure-trove of other business intelligence and customer insights that can be extracted from customer conversations that typically go untapped through the existing agent quality monitoring processes.

Read More: CallMiner Announces Eureka Coach To Provide Contact Center Optimization With Continuous Insight And Closed-Loop Case Management

What is Automated Call Scoring and what kind of data can one hope to capture from customer support calls, texts, emails and social?
Automated call scoring produces an AI-fueled score based on a combination of customizable call characteristics, such as:

  • Expressions of dissatisfaction and empathy
  • Escalations and sales objection handling
  • Metrics such as silence and acoustic agitation
  • Call attributes such as whether a sale occurred or not

Scoring can be customized and leveraged for any number of business objectives including agent quality or performance, customer satisfaction, compliance risk, customer churn risk, and fraud prediction.
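
A toy version of such a score might blend weighted signals like those above into a 0-100 value. The weights and feature names below are invented for illustration and are not CallMiner's model.

```python
# Hypothetical weighted blend of per-call signals, normalized to 0-100.
def score_call(call: dict, weights: dict) -> float:
    raw = sum(weights[k] * call.get(k, 0.0) for k in weights)
    total = sum(abs(w) for w in weights.values())
    return max(0.0, min(100.0, 50 + 50 * raw / total))

weights = {
    "empathy_phrases": +1.0,   # expressions of empathy
    "dissatisfaction": -1.5,   # expressions of dissatisfaction
    "escalation":      -1.0,   # escalations / objection handling trouble
    "silence_pct":     -0.5,   # metric: proportion of silence
    "agitation":       -1.0,   # acoustic agitation
    "sale_closed":     +2.0,   # call attribute: sale occurred
}
call = {"empathy_phrases": 0.8, "dissatisfaction": 0.2,
        "silence_pct": 0.1, "sale_closed": 1.0}
print(score_call(call, weights))  # 67.5
```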

What are the challenges of speech to text, and how do you try to overcome these?
Speech to text is an evolving technology whose transcription accuracy depends on audio quality. Using a telephony infrastructure and/or recording technology that allows access to high-quality audio, ideally speaker-separated, will produce the best results. However, the key to gaining insights from speech analytics is a solution that understands the audio transcription at the contextual level. Categories and semantic building blocks within Eureka can recognize word phrases even with transcription inaccuracies, providing meaningful data without a transcript that is 100% accurate. Scale is another challenge. Processing a small number of calls is easy with modern tools, but we had to solve some interesting problems to simultaneously process 100,000+ calls in real time. Before any really interesting ROIs can be tackled, you have to have this kind of scale.
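
That tolerance to transcription errors can be illustrated with fuzzy matching over a sliding window of transcript words. Eureka's categories and semantic building blocks are far richer, so treat this as a toy analogue.

```python
# Match a category phrase against an imperfect transcript.
from difflib import SequenceMatcher

def phrase_in_transcript(phrase: str, transcript: str, threshold: float = 0.8) -> bool:
    words = transcript.lower().split()
    n = len(phrase.split())
    for i in range(len(words) - n + 1):
        window = " ".join(words[i:i + n])
        if SequenceMatcher(None, phrase.lower(), window).ratio() >= threshold:
            return True
    return False

# "cancel my account" survives the misrecognized "cancel me a count".
print(phrase_in_transcript("cancel my account", "i want to cancel me a count today"))
```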

How can call centers leverage CallMiner’s technology to improve the productivity of their agents?
Automated scoring of 100% of interactions can help agents understand where they need to improve their customer handling and where they may be wasting valuable time. Objective scoring on every call provides standardized, data-driven coaching for targeted review sessions with supervisors, plus opportunities for self-improvement between coaching sessions through direct access to daily performance feedback. Specifically, analytics can identify abnormal instances of silence that represent call-handling optimizations. Real-time alerting can guide agents to the next best action in the context of the conversation, driving toward rapid resolution.

Read More: CallMiner Eureka Named Best Speech Analytics Solution And A Top 10 Contact Center Technology

What kind of performance metrics does Eureka Coach provide to establish how effective customer support has been? How does the technology calculate emotion?
Coach provides agents and supervisors access to automated scoring on any number of performance attributes for 100% of customer interactions to understand performance trends. Automated scoring in CallMiner Eureka can be customized to any number of key indicators that are most important to the company and the specific agent group.

Top-level metrics could include an overall agent performance score, customer satisfaction, sales effectiveness, compliance, and call-handling efficiency. Elements of these scores may measure the presence of language – such as empathy when dissatisfaction is expressed, or understandability issues – combined with acoustic metrics such as acoustic emotion or agitation, call duration, and percent silence, plus other metadata values such as whether a sale closed and its dollar value.

Tell us about the real-time redaction feature and how CallMiner ensures the privacy of customers. 

Real-time redaction grabs audio and call metadata while the call is occurring; it recognizes PCI data and other sensitive information – such as social security, account, and credit card numbers – within both the call audio and the transcription, and removes that data for customer security. Redacted audio and transcripts can be safely held to meet compliance and regulatory requirements without the threat of internal or external exposure of sensitive data.
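
On the transcript side, the pattern-targeting step can be approximated with a couple of regular expressions. This is a minimal sketch with illustrative patterns only; production redaction, as described above, also scrubs the audio itself.

```python
# Mask SSN-style and card-number-style digit patterns in a transcript.
import re

PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # SSN-style
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # card-number-style
]

def redact(transcript: str, mask: str = "[REDACTED]") -> str:
    for pattern in PATTERNS:
        transcript = pattern.sub(mask, transcript)
    return transcript

print(redact("my social is 123-45-6789 and card 4111 1111 1111 1111"))
```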

How does Eureka Alert help call centers take immediate action in real time?
Eureka Alert provides real-time alerting to agents, supervisors, or managers to allow for action to be taken in context with what is occurring on the call. Alerts can trigger in response to lack of language – such as a reminder to an agent to read a specific disclosure statement – or in response to something the customer states – such as a mention of a competitor promotion to guide the agent to offer a competing promotion. Alerts can also trigger messaging such as emails to senior management for sensitive issues, or integrate into a supervisor’s desktop monitoring application when a call is escalating and may require intervention.

Providing real-time reminders and suggestions gives agents the opportunity to quickly address sales objections, cancellation threats, competitor mentions, and special offers based on customer buying or churn signals. By offering the right product or solution at the right time based on what the customer is sharing with the agent, companies can reduce customer attrition and improve upsell rates as well.
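
A stripped-down rule table makes the two trigger types concrete: fire when a phrase appears, or when a required phrase has not appeared by a deadline. The rules, phrases, and routing below are invented, not CallMiner's configuration format.

```python
# Hypothetical in-call alert rules over the transcript-so-far.
ALERT_RULES = [
    {"when_missing": "this call may be recorded", "by_second": 30,
     "notify": "agent", "message": "Read the disclosure statement."},
    {"when_present": "competitor promotion", "notify": "agent",
     "message": "Offer the price-match promotion."},
    {"when_present": "speak to your manager", "notify": "supervisor",
     "message": "Escalation risk on this call."},
]

def check_alerts(transcript_so_far: str, elapsed_seconds: int) -> list:
    text = transcript_so_far.lower()
    fired = []
    for rule in ALERT_RULES:
        if "when_present" in rule and rule["when_present"] in text:
            fired.append(rule)
        elif "when_missing" in rule and elapsed_seconds >= rule["by_second"] \
                and rule["when_missing"] not in text:
            fired.append(rule)
    return fired

for alert in check_alerts("i saw a competitor promotion...", elapsed_seconds=45):
    print(alert["notify"], "->", alert["message"])
```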

Read More: More Than 50 Leading Companies Announce New Products, Services, Demonstrations & More At ICMI Contact Center Expo 2018

Which industries have exhibited higher traction? To what extent can one customize the solutions to suit industry and business goals?
We have customers that span the range of industries as our solution can be customized to suit any industry and goals.

We have seen financial services lead the way in the adoption of speech and engagement analytics, but we have also seen a recent uptick in healthcare, travel and tourism, insurance, and retail companies adopting speech analytics for their customer service and claims departments.

Tell us about CallMiner's technology offering for the healthcare industry. What future impact are you moving toward?
Healthcare is moving toward a strategic focus on patient experience both within the medical facility and throughout the billing process. Healthcare organizations have realized that a drop in satisfaction in either process can taint the whole experience so they must understand the full patient experience even after care may have ended. By leveraging speech analytics, healthcare payers and providers can understand where patients are being unnecessarily routed or put on hold, what’s driving the majority of their calls, and how to better staff their contact centers. Additional impacts can include patient safety – ensuring proper triaging or surgery preparation procedures for healthcare providers – or customer retention for healthcare payers during open enrollment.

What can you share with us about future plans?
CallMiner is focused on further developing our AI-driven functionality with enhanced discovery and predictive modeling tools. By using a combination of AI, NLP, and Machine Learning, CallMiner aims to provide more automation in the analytical process such as automatically correlating events on interactions to outcomes, and simplifying predictive model development and automated interaction classifications.

Thank you, Jeff! That was fun and hope to see you back on AiThority soon.

Interview with Alexandre Debecker, Chief Growth Officer – ubisend

Alexandre Debecker

Alex Debecker is the founder and CGO of ubisend, the leading chatbot and AI solution development company. ubisend helps businesses solve real problems by enabling them to communicate effectively with their audience and by facilitating internal team engagement.

ubisend

ubisend is the leading chatbot and AI solution development company. We deliver intelligent, on-demand experiences across 29 channels.

Tell us about the journey that led you to start a chatbot company.

The journey of ubisend started 15-odd years ago. For over ten years, we have been developing health-critical mobile messaging campaigns in low-resource countries. Back then, sending SMS content across the world was a laborious process: messages were drafted into spreadsheets and forwarded to telecom providers. It was neither secure nor efficient, and we would not get any success or deliverability metrics back. There was a definite pain point.

We knew there was something to be done. So, an internal team developed a platform that would allow us to plan, write, schedule, and send messages effectively. More importantly, it would also allow us to report on the success of these messages — delivery, open rate, and more.

The platform allowed us to go beyond just sending messages. It enabled us to create AI-based models that helped us time message sending better, ensuring the correct person received valuable advice at the right time, and much more.

About three years ago, the world witnessed a massive shift in the way consumers message one another. SMS had been old school for a while, and finally, mobile messaging apps (Facebook Messenger, Telegram et al.) started opening up their APIs. It was the catalyst that enabled us to be one of the first companies in the world to dive head-first into humans having useful, two-way, natural conversations with machines.

We brought our messaging knowledge, our AI-power, our client base; and started building chatbots across all major platforms.

What are the challenges with traditional messaging? How do chatbots solve these?

Traditional messaging between businesses and consumers (or businesses and employees) is broken.

For the past 20 years, the relationship has been one-sided. On the one hand, businesses push messages to their customers whenever they see fit. On the other hand, consumers cannot reach businesses when they need them. We have all experienced the dreadful "thank you for your message; we aim to reply in 3 to 4 business days".

Chatbots solve this one-sided conversation. They turn B2C, one-sided broadcasting into a two-way conversation. Instead of having to wait, customers can now prompt a chatbot and get answers and help immediately.

Read More: Formative Chatbot Integrations Coming To Workplace By Facebook

Which industries are most impacted by your chatbot creation? What is the core functionality of the chatbot here?

At present, by far, the HR space is the one benefiting most from our chatbots. When we think about it, it is not that surprising.

HR is often an afterthought within a company. Budgets tend to go to departments that directly grow revenue, like marketing or sales. A company then grows and finds itself with a tiny, overworked HR team.

The solution chatbots provide is to remove a chunk of the burden from HR teams. As an example, a common HR deployment prevents around 40% of staff inquiries from hitting HR desks. Almost overnight, the HR team gets a lot of its time back. This is achieved by gathering a company's internal documents, applying our machine learning algorithms, and turning the first iteration of the chatbot into a knowledgeable member of the HR team. No expensive integrations or inside development required.
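
As a hedged sketch of that "learn from internal documents, answer staff questions" loop, the snippet below retrieves the closest internal-policy passage with TF-IDF similarity. ubisend's actual algorithms are more sophisticated, and the policy snippets are invented.

```python
# Answer staff questions by retrieving the closest internal document.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Annual leave: staff accrue 25 days of holiday per year.",
    "Expenses: submit receipts within 30 days via the finance portal.",
    "Parental leave: up to 12 weeks, apply through your line manager.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)

def answer(question: str) -> str:
    q = vectorizer.transform([question])
    best = cosine_similarity(q, doc_vectors).argmax()
    return docs[best]

print(answer("how many holiday days do I get?"))
```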

On top of this, an HR chatbot solution is not customer facing. It is a low-risk approach for a large company. It is a way to test the new technology without putting anything in front of the consumer. It is the opportunity to see what the technology can do and build a business case before further investing.

How does the process of development change across channels of communication; Facebook Messenger versus WeChat, for example?

Back when we first developed our platform, our goal was to build something that would allow our users (and us) to deploy chatbots across multiple channels, without hassle.

The ubisend platform, which sits at the core of all our chatbots and manages the sending/receiving of messages, analytics, human takeover, and more, is entirely ubiquitous.

We built it in such a way that deploying a chatbot onto any available channel is the flick of a switch; the platform does the hard work for us. It automatically downgrades or upgrades message content, delivers rich media where and how a channel accepts it, and always gives the best experience possible. The use case is phenomenal; one person can be talking to a single bot via SMS, another via Facebook, and someone else via live-chat. It is entirely cross- and multi-channel, the experience is practically the same, and all conversations, human takeovers, reporting, and metrics are in one place.
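
The downgrade/upgrade idea might look like the following capability-table sketch. The channels, capability flags, and message shape are hypothetical; this is not the ubisend platform's API.

```python
# Adapt one rich message to what each channel can actually render.
CHANNEL_CAPS = {
    "facebook": {"rich_media": True,  "buttons": True,  "max_len": 2000},
    "sms":      {"rich_media": False, "buttons": False, "max_len": 160},
    "livechat": {"rich_media": True,  "buttons": True,  "max_len": 5000},
}

def adapt(message: dict, channel: str) -> dict:
    caps = CHANNEL_CAPS[channel]
    out = {"text": message["text"][:caps["max_len"]]}
    if caps["rich_media"] and message.get("image"):
        out["image"] = message["image"]
    if caps["buttons"] and message.get("buttons"):
        out["buttons"] = message["buttons"]
    elif message.get("buttons"):
        # Downgrade buttons to a numbered text menu for plain channels.
        menu = "\n".join(f"{i+1}. {b}" for i, b in enumerate(message["buttons"]))
        out["text"] = (out["text"] + "\nReply with a number:\n" + menu)[:caps["max_len"]]
    return out

msg = {"text": "Track your order?", "image": "status.png", "buttons": ["Yes", "No"]}
print(adapt(msg, "sms"))       # text-only, buttons become a numbered menu
print(adapt(msg, "facebook"))  # keeps image and buttons
```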

What’s the top misconception people have about bots?

The industry has gone through a couple of phases. A couple of years ago, we all got excited about chatbots. Channels like Facebook Messenger opened up their APIs and helped drive rapid adoption. Apparently, chatbots were going to save the world.

Then, the hype slowed, and people became a little pessimistic. At the time, most of the chatbots in the wild were nothing more than decision trees, and the industry lost a bit of its sparkle.

Going through these two phases is common. The Gartner hype cycle shows new technologies need to go through this hype phase, then a disillusionment phase, before finally creeping back up to a productivity plateau.

I believe we are now slowly going back up and heading towards the productivity plateau.

As the cycle unfolded, ubisend was smack-bang in the middle, helping the technology evolve. We were happy chatbots were suddenly popular (as we had already been building them for years) but knew what was likely to happen.

Platforms opened up and enabled anyone to build low-quality chatbots easily; it was quite clear the industry would quickly reach the low disillusionment point. We knew that developing effective chatbot technology is hard and requires more than a simple flowchart and a few clicks of a mouse.

Today, as we head towards the productivity plateau, we need to fight the idea that chatbots are just decision trees or flow-based question/answer systems.

We also expect that once we reach productivity, the industry will drop the word ‘chatbot’ as it has a gimmicky connotation.

We do not build chatbots; we make complex machine-led conversational agents that are solving real business problems. If we do need to shorten it, something like ‘conversational software’ is much more appropriate.

Read More: SignalWire’s Next-Gen Communication Platform Builds Telephone Gateway To Google Cloud AI

According to you, how important is personality when developing a chatbot and how does it help?

Chatbots enable us to enter a new level of creativity. Companies can go beyond iconography, colors, and images and play with language, persona and tone of voice — at scale.

Developing a chatbot’s personality is important. After all, your chatbot may become your customer’s first contact with your company. You need it to align with your company’s culture, and it brings a whole lot of fresh internal decisions to the table.

One of the first things we do with a new client is to lead a chatbot personality workshop. After all, it is something most marketers/PR/creatives have never worked on. We ask ‘so, how does your company talk?’

Should your chatbot be friendly? Formal? Does it speak like a teenager? Does it have a gender? What is its name? Does it use emojis? Does it LOL?

A well-crafted chatbot personality helps users on their journey. Should your chatbot not fit the culture of your company, people may feel awkward talking with it. Also, to keep engagement high, it is important to improve not only your chatbot’s functionalities, but also its conversational UX and language. Test everything.

How does ubisend integrate with enterprise technology?

A big part of our focus is enterprise clients. Many of our customers are Fortune 500 businesses. As such, everything we do in this space is custom-built.

We do not believe in templated, out-of-the-box solutions that we copy/paste from one client to the other. Each chatbot is unique, and we treat it as such. Integrating with established, internal enterprise technology is a must.

Our typical approach is to evaluate the tech stack we are going to work with during the discovery phase. We will meet with the leaders of each team to find out what we need to integrate with. If they have anything already built that will help us, we will dig into that. If not, we figure out who to talk to about making it happen.

There are no secrets here. Building custom chatbots for high-end clients means adapting to the technology they have on site. After all, no one wants a chatbot that lives in a vacuum or to have to log in to another piece of software. Integration is key.

That being said, depending on the solution, things do not have to get super-integrated. Typically, our initial build for an enterprise client is a lightweight proof of concept: a standalone software package that doesn't have complicated (if any) integrations into current systems. We build it to test whether their users and staff want to use such a service, and for the company to see the potential reward, before going down the path of time-consuming integrations.

Tell us about your process of designing the conversation flow. Tell us a bit about your team.

Designing the conversation flow, like anything else we do when we build chatbots, is a step by step process.

Our first step is always to define what we call the chatbot’s One True Goal. THE thing the chatbot is there to do, the reason for its existence. What does this chatbot need to do?

Once defined, we go through a user story exercise. During this exercise, we identify all the moving parts, isolating all the users that could interact with the chatbot and what they want from it. It helps us map out each user’s path to the One True Goal.

Finally, now that we have both the chatbot’s One True Goal and a clear understanding of its users, we design the conversation flows. Like anything else in design, we aim to get the user from greeting to achieving the goal in as few steps and as quickly as possible.

Then, once we have nailed this all down, we extrapolate the conversation flow, map out the technical features and APIs the chatbot will need, and we get to work.
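
As a toy illustration of the result, the sketch below encodes a flow for a hypothetical One True Goal ("book a demo") as a small state machine that moves from greeting to goal in as few steps as possible. All step wording is invented.

```python
# Minimal conversation-flow state machine for a hypothetical goal.
FLOW = {
    "greet":    {"say": "Hi! Want to book a demo?", "next": "qualify"},
    "qualify":  {"say": "What's your company email?", "next": "schedule"},
    "schedule": {"say": "Pick a slot: Tue 10am / Wed 2pm", "next": "done"},
    "done":     {"say": "Booked! See you then.", "next": None},
}

def run_flow(user_replies: list) -> None:
    state = "greet"
    replies = iter(user_replies)
    while state:
        step = FLOW[state]
        print("bot:", step["say"])
        if step["next"]:                        # goal not reached yet
            print("user:", next(replies, "..."))
        state = step["next"]

run_flow(["yes please", "jane@example.com", "Tue 10am"])
```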

We'd like to know your thoughts on the future of chatbots and how their role will evolve.

The future is bright for chatbots. Now that we are past the hype and almost out of the disillusion phase, we are heading straight for productivity. Ironically, this gets me hyped.

I believe we will see rapid growth in chatbot usage driven by SMEs.

At the moment, effective chatbot technology remains accessible only to large companies. Little by little, though, the technology will become more accessible. We’ll hit mass adoption within the next couple of years. It will become normal to talk to your local coffee shop’s chatbot.

In the enterprise space, chatbots (conversational software!) will live at the center of the business and across all departments. We are already seeing this happening with our clients.

I do not see any sign of this adoption slowing down any time soon.

Read More: Helpshift Unveils SensAI: AI Tech Designed Specifically For Customer Service

What does ubisend offer to sales and marketing teams?

I see chatbots as a fantastic marketing tool. Marketers are always looking to more effectively engage their audience, reach potential customers, and help them notice their brand. Over the past few years, we have seen a growing awareness of conversations between businesses and potential customers. The lean movement means entrepreneurs no longer build a business in their garage before looking for customers. Today, we all value customer feedback more than anything else.

This trend is spreading into marketing too. Marketing is no longer just about blasting email lists or throwing billboards in front of millions of eyes, hoping a pair of them will buy a product. It is about conversing with the customer, getting to know them, giving a personality to the brand. It is about understanding needs and problems and talking about how your solution is best for them.

Chatbots are the perfect tool to achieve that.

In terms of sales, there is again a lot to be said about a chatbot always being present. It does not sleep, get grumpy or turn up late. The most obvious sales chatbot solution is a tool that sits on an e-commerce website and handles all inquiries. The chatbot knows everything about the catalog of products, makes recommendations, up- and cross-sells 24/7 — at scale.

What is your suggestion for other bot makers? What changes would you like to see in the community currently?

Let’s try to all move away from the word ‘chatbot’, shall we?

What kind of analytics can enterprise look to gain from ubisend’s chatbots?

Much like integrations, analytics are unique to the enterprise we work for.

With every chatbot build, we supply our clients with access to a suite of tools through our ubisend platform. Our platform does more than send and receive messages: it logs basic analytics (delivery rate, open rate, click rate – the normal stuff) along with advanced reports. Depending on the privacy requirements of the chatbot, a business can watch conversations in real time, use sentiment analysis, and pull out the key common questions, problems, and issues across every channel.

As we go through the One True Goal exercise with a new client, we map the critical steps we need to measure. From the initiation of the conversation all the way to success, we will track every key interaction. When we ship a chatbot, we also deliver a custom dashboard, specific to the business needs of the client and based on the specific metrics they need to monitor and hit.

Thank you Alexandre! That was fun and hope to see you back on AiThority soon.

Read More: How Do You Close The Diversity Gap In Technology Companies?

Interview with Paul McGough, CTO, Qwyit

Paul McGough

Paul McGough, Founder and CTO of Qwyit, LLC, a leading cryptosecurity technology firm, is a telecommunications expert with over 35 years of progressively responsible experience managing IT teams through the development, integration, implementation, and support of financial, project management, and database applications, as well as security systems.

Qwyit

QwyitTalk, our Security as a Service platform, provides the proven TLS process using our patented authentication and encryption protocol. Any network, any application, every communications product can instantly offer secure, private messaging through a simple, universal connection to our globally available QwyitTalk SaaS platform. Our performance and efficiency are order-of-magnitude improvements over current TLS methods.

Tell us a bit about yourself. What made you get into the business of cyber protection? 

Even while growing up, I always focused on science and math. It was a natural progression to gravitate toward the explosion in personal computing and technology that began in the early '80s. I worked in radar manufacturing, moved to DoD government telecommunications programs, and then worked in the classified communications world for over 10 years. That is where the original ideas for our Qwyit authentication and encryption methods began – I sought simpler means of achieving what then took massive computing power and lengthy messaging streams: secure communications. I thought, "There's got to be a better way to do this!"

The core thinking began by studying trust: how it works, for whom it works, and when and where it works. Then I learned that combining unique, provably secure mathematical equation sets with simpler pass-through and federated trust models provided an order-of-magnitude improvement over current techniques. Qwyit has blossomed from there.

Which industries see a higher tendency to be victims of a data breach? What are the factors that contribute to being a target? 

It's all about the money! Obviously, the new cryptocurrencies and their exchanges are ripe – and rich – targets. All of the other victims are spread across many industries and include everything from small businesses to those large incidents with which we're all familiar. They have several things in common: something of value is stolen (actual monetary theft, like redirecting a wire transfer), leveraged (like identity theft), or replicated (for instance, credit and/or banking info). The obvious contributing factors are the amount of data available (Experian) or the high dollar value (cryptocurrencies, individual retirement assets). After targeting value, the main contributing factor is ease – how lax the security is, how fast and how often the data can be accessed and moved, and so on.

If your data has any value, whether individual elements or collectively in total – and you’re not being security conscious and diligent with its protection from both the customer and employee perspective – you’re a target. Sooner or later.

How does enterprise protect itself currently? How effective are these measures? 

The current protection systems include a widespread combination of policy, processes and practices. Policies include, and range from, employee work and computing requirements, data handling, customer reporting legal standards, etc. Processes cover the breadth of the organization, and are mandated for types and levels of employee, for physical access, device specifics and requirements, the installed software and update intervals, etc. Practices cover the organization’s depth and include all the actual specifics of customer and employee Internet connectivity, email handling, operational device management, data integration, transaction processing, etc.

Unfortunately, the effectiveness of a comprehensive, diligent pyramid of protection can only be measured in the negative – if you haven’t been attacked, it’s working…up until it doesn’t. And then ‘it didn’t, so what do we do now?’ needs a recovery process. I’ve told this story before: The University of Maryland’s graduate offices were attacked, and afterward, the President told of just having updated all of their protection with ‘new’. So then, he said, they’d have to ‘double their efforts’. Not knowing what that meant, I came to the conclusion that while a comprehensive protection application is important, in order to actually increase effectiveness, the tools themselves need an order of magnitude improvement.

Tell us about three of the latest trends in cyberthreats, and help us understand them. 

My entire focus – and that of our authentication and data encryption methods – is end-to-end security, so I'm not a detailed examiner of trending threats. The reason is simple: there shouldn't be "trending threats". What helps in understanding this is that attacks are directly related to the strength of the protection method. Just imagine a cardboard box in your closet holding your money – or the world's strongest metal box on a platform in the middle of a military installation, surrounded by active, armed soldiers. Right? The highest possible number of ways to steal your money exists in the weakest installation; the method of protection limits the attacks. "Trending threats" exist – and will continue to do so – if protection is focused only on stopping existing or imagined attacks, instead of strengthening the actual methods used for protection so those attacks don't exist.

The best protection is end-to-end. For instance, if I encrypt my social security number and only give out the encrypted version, it doesn’t matter how any recipient stores it. It has no value without my key. And if the storage location needs to ‘use’ it, they ask me for the key. As long as the transmission method, and the authentication of the requestor, used unbreakable methods, then I can give them that key.

Then I change it and re-store the newly encrypted version. My SSN is protected end-to-end, and the access is controlled – and identifiably liable for any damages. If all SSNs were stored and accessed this way, there'd be no value in stealing 140 million of them; as an attack, it simply doesn't exist. End-to-end stops "trending threats"!
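
The ownership model described here – recipients hold only ciphertext, the owner holds the key and can rotate it – can be sketched with off-the-shelf authenticated encryption such as the cryptography package's Fernet. Qwyit's own protocol is different; this is purely illustrative, and the SSN is a dummy value.

```python
# Minimal sketch, assuming Fernet (AES + HMAC) as the cipher.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                   # only the data owner holds this
stored = Fernet(key).encrypt(b"123-45-6789")  # recipients store ciphertext only

# An authenticated, vetted requestor asks the owner for the key...
print(Fernet(key).decrypt(stored))            # b'123-45-6789'

# ...after which the owner rotates: re-encrypt under a fresh key, re-store.
new_key = Fernet.generate_key()
stored = Fernet(new_key).encrypt(Fernet(key).decrypt(stored))
```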


Read More: TruGreen Partners With IBM Watson For Ads That Recommend Personalized Lawn Care Plans

How do traditional cybersecurity measures hold up against increasingly sophisticated e-criminals? 

There are several different reports published about US cybercrime. Every one of them indicates that cyberattacks are the fastest-growing crime. Period. If it were bank vault robbery, you'd rightly assume that "traditional measures" were no longer working: the criminals have figured out how to melt steel! What is happening electronically is that the increasing amount and value of what is being transacted and stored is meeting headlong with an increase in the sophistication, understanding, and capabilities of those bent on stealing it. So some of the crime is "melting steel" (attacking traditional measures), while some is just an increase in capability – and creativity.

I just happened, today, to be at my bank to wire some funds to a home builder. Lo and behold, about three hours after accomplishing the transaction at the branch, I received a call from their National transaction center, saying my transaction had been flagged – and held – because of a recent scam involving wired funds and home transactions (because of the substantial value!). Seems the criminals scan open emails from identified home-related businesses, garnering personal info, wiring instructions, etc. – and create fraudulent emails with instructions to send to their accounts instead. There is no recovery from this – everyone loses. My bank’s diligence, and personal involvement, verified the transaction. So there is at least one ‘traditional measure’ that is still effective against cybercrime: personal, authentic, service.

The disturbing aspect is the cost, and the unavailability of providing this ‘fix’ to daily, lesser valued assets – but these are just as important to all of us individually. As I stated previously, the real fix is in end-to-end security: if I sent the wire to the wrong place, they wouldn’t have been able to use it without the key to open it – and that would have been sent, authentically only to my builder. We’d just wire it again, to the proper destination – no harm, no foul. And no cybercriminal would ever have created such a scheme because there’s no value! Oh – and how about end-to-end email security in the first place…

Can you talk us through authentication versus encryption and why one might confuse one for the other?

Authentication is making sure something is what, or who, it's supposed to be. Encryption is using something to hide a message (encrypt) that only someone who has the same "something" can use to reveal the message (decrypt). In electronic communication, that "something" is a digital key. While using encryption, you may confuse the two if what is being hidden is authentic!

Suppose you and I are throwing a surprise party for our friends next week – and we get together and share a ‘key’ that we’ll use to hide our messages to each other in planning for the party (because our friends always shoulder-surf and read our emails when we look at our phones!) When we send messages back and forth, we use our key for encryption and decryption. After I send a message to you, you can read it – but how do you know I’m the one who sent it to you? Maybe I wrote down our key, and one of our friends saw it! That’s the authentication part – and how it’s different than the encryption part. You’re sure it’s a real message because it decrypts into what you expect: but how are you sure that I wrote it?

In today's systems, authentication is covered by a separate exchange using Public Key technology; that authentication exchange then includes sending a new key that we'll use for encryption in subsequent messages. As you can see, this answers your question for any message that includes the authentication exchange – but if we don't do it every single time, can you really be sure it's me? Unfortunately, this dilemma exists in today's Public Key systems: authentication isn't used in every message… and that is one of the ways cybercriminals can attack.
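
The distinction is easy to show in code: one shared secret hides the message, while a per-message tag keyed with a second shared secret proves authorship. This is a generic encrypt-then-MAC sketch (Fernet already MACs internally; the explicit tag just makes the two roles visible), not Qwyit's method.

```python
# Encryption hides the message; the HMAC tag proves who sent it.
import hashlib
import hmac
from cryptography.fernet import Fernet

enc_key = Fernet.generate_key()      # shared key for hiding the party plans
auth_key = b"second-shared-secret"   # shared key for proving authorship

ciphertext = Fernet(enc_key).encrypt(b"Surprise party moved to 7pm")
tag = hmac.new(auth_key, ciphertext, hashlib.sha256).hexdigest()

# Receiver: verify authorship BEFORE trusting the decrypted content.
expected = hmac.new(auth_key, ciphertext, hashlib.sha256).hexdigest()
if hmac.compare_digest(tag, expected):
    print(Fernet(enc_key).decrypt(ciphertext))  # b'Surprise party moved to 7pm'
```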

Are AI-companies more at risk? If so, how? 

I don't believe AI companies are at any more risk than any other – but their AI technologies are so incredibly vulnerable it is frightening. I've used this example before, and it's pertinent here: in all the recent stories about the AI evolution in autonomous vehicles, when have you read a single article that, instead of questioning capability or morality (the Trolley Problem), examined the security of the technology against outside attacks and interference? Do you remember the recent, famous hacking escapades of a couple of researchers who took control of a late-model car while it was being driven by a human? What does that say about what could – and inevitably will – happen when a fleet of sitting-duck AutoCars is just idling in some garage? AI technology is going to revolutionize something, alright – but it isn't travel, it's hacking, as someone makes those cars tootle around wherever they want! The danger of this weak security is absolutely startling.

There is no greater risk to the acceptance of this revolutionary, AI-driven commuter's dream than using traditional security-system authentication and encryption methods: just one hacked crash fatality will, rightly, destroy the multi-billion-dollar industry before it even begins. And this is just one of the first AI endeavors. The risk greatly exceeds the current security.

Read More: Interview With Jeffry Van Ede, Co-Founder & CEO At Simplaex

What trends do you see in the cybersecurity community and what can we expect in the immediate future? 

The absolute best trend is awareness and diligence – and a motivated, concerted effort to try new ideas. Things like machine learning applied to cybersecurity, SAO marketplace maturity, and efforts in the IoT marketplace are all immediately at hand.

How does Qwyit’s technology offer protective and responsive functions to protect their customers?

QwyitTalk, our Security as a Service, is the first and only TLS methodology improvement since its inception 20+ years ago – it also happens to be the only unbreakable security technology available today. TLS is that lock on your browser indicating secure communication – and it’s the only global secure communications protocol. We provide the exact same authentic and encrypted time-honored process – but we provide it as a uniform, universal service that any network can instantly join and then deliver unbreakable communication security for their application customers.

QT provides every participant in secure communications with an order of magnitude technology advancement over current TLS: For networks – performance and efficiency. For business – unbreakable assurance for data control. For users – simple and flexible use without maddening new ‘stuff to do’. For developers and administrators – straightforward, streamlined, universal, uniform implementation. For everyone – unbreakable end-to-end security.

How have mobile banking and cryptocurrency changed the game? Are we in more danger now?

More danger? In a word for both: YES. There are two major security issues with mobile banking – the mobile part (the connectivity networks, the devices, the integration, the providers, etc.) and the banking part (even more integration, transaction networks, hardware, participants, etc.). The sum is more than the parts: the difficulty of providing end-to-end security requires substantial methodology improvements – and without them, as more and more of the banking value (transaction volumes and amounts) is performed using mobile, the more attack points and profiles will surface. The convenience can't be beat: security needs to step up.

Cryptocurrencies, while still operating in a cloudy regulatory environment and as demonstrably shown in the constant huge monetary attack losses at exchanges, are a nightmare waiting to turn into a daylight tragedy. The main danger is that the best of the security technology industry isn’t focused on improving this market; as its sustainability as a viable business partner is lacking. There certainly is danger lurking there…just ask Mr. Wozniak.

Thank you Paul! That was fun and hope to see you back on AiThority soon.

Read More: Interview With Laszlo Kishonti, CEO At AImotive
