Ron: Thank you, everyone. Now for something a little different. I think we're climbing the stack as we go. I'm here to talk to you about best practices for working with big analytics, big data, and ultimately analytics on top of it. I'm Ron Bodkin, Founder and CEO of Think Big Analytics. Pleasure to be here tonight. Maybe I'd start a little introduction about who we are and my background, before getting into some of the insights, ideas that we've seen.
We're the first and leading professional services firm that's purpose-built for big data. We started the company about two and a half years ago with the sole mission of helping the enterprise really take advantage of big data and the advanced analytics available on top of it. So we've constructed our firm and our approach entirely around the new challenges and opportunities that presents.
I'll share with you some of the insights and the differences that we see, which we think have a profound impact. This is a new game, it's a new day in so many ways, and there's a lot of opportunity that can be tapped. As part of doing that, we've been one of the fastest growing big data start-ups. We were listed as one of the pure-play leaders in big data in Forbes Magazine earlier this year. Certainly, we've been seeing a lot of great results.
I'm going to talk more about what we've seen, what the best practices and lessons are, and a few more generalized use cases. But we're doing work for some amazing customers. Some great customers here in technology firms of various kinds, some of the leaders in different areas of technology, retail, financial services, advertising and marketing. So no surprise, to the point of Jim's slide, there are use cases across many industries, and we think there's a lot of opportunity here.
That's a little bit about our background. I came to start Think Big... Ultimately, I put together the founding team, and some of the key first hires of the company were people who'd worked with me way back in the 1990s at a company I'd started after I left the MIT PhD program. I led the team to the finals of the Entrepreneurship Competition, and joined Cambridge Technology's incubator to be the Founding CTO of a company called C-bridge. It had a similar business model of being a purpose-built systems integration firm to help the enterprise use that new thing back then, the Internet, to really do business.
And I think there's a lot of parallels in terms of the disruption and the opportunity to create value by having the Internet as a means of communication. Many of those same ideas and parallels are playing out in the big data space. In fact, you might even look at the big data wave as the cherry on top that's capping it off. All the connectivity and all the data that became available with the Internet built up this pent-up demand: data that you could now do things with. And it was just a lag. In this case, it's been a lag of about 15 years to get to some of the value that could be created on top of that.
We're applying a similar, nimble approach, and I'll talk a little bit about that. In between my time at C-bridge and today, two and a half years ago when I started Think Big, I was the VP of Engineering of a pioneer in big data called Quantcast. Quantcast today has about 50 petabytes of data, processes about a million events a second with 100 milliseconds or less of latency for real-time bidding on the advertising exchanges, and was a pioneer in disrupting the online measurement model. Unlike Nielsen and comScore, which were using small panels, Quantcast used statistical models to measure every page view and to accurately put together the audience characteristics of millions of websites and ad campaigns.
Out of our experiences at Quantcast, both building this large-scale big data system and the predictive analytics that lay on top of it to create these new products, I felt there was so much opportunity to apply these same techniques to the enterprise. That leads to what we see as three phases, or waves, of big data.
On the left, you see that the first wave was really the Web Scale pioneers that invented these techniques. Because they had problems like search indexing, display advertising, and audience measurement that just couldn't possibly... [inaudible 00:04:18] recommendation engines couldn't possibly be handled by the traditional database technologies. It was cost prohibitive in many cases. They weren't flexible enough to perform the calculations.
So big data was invented as a way of solving these challenges. Then something interesting happened. Those companies started to innovate and invent at a rapid pace using information. Starting to get more intimate with their customers. Starting to personalize. Starting to roll out new businesses and disrupt industries.
So what we saw is the second wave of big data that came in the form of invention and disruption in a number of industries. And this has triggered the third wave, which is the enterprise getting into the game and realizing that there's both an opportunity to apply these same techniques. To use all this information the enterprise has. To start to gather and use information more effectively. And also a mandate, a threat that as industries are being transformed around information economics, the enterprise can't wait. But needs to also arm itself and have some of the same capabilities.
We're seeing that, not only in the form of case studies from some of the various firms in these industries that I've talked about, whether it be financial, retail, telecommunications, or life sciences. Not only are we seeing case studies, but we're also seeing these waves of investment. So the first wave of investment was really the web scale pioneers, often funding open source. The second wave of investment has been the wave of venture capital coming in, now being followed by commercial adoption from customers. Indeed, there's even government research funding fundamental invention, the kind of research that, of course, Xerox PARC is famous for and is still doing.
We see these waves rolling out as big data has enabled these big analytics. So what are the best practices? Before we get to that, one thing I'd like to lay out there is a little bit of what we see as the seven myths of big data. You hear a lot about what big data is, but I want to show you a few things that we think it's not. These are sort of chestnuts, ideas that are out there.
One of them is this notion that it's really just another name for business intelligence. There's a lot of organizations that haven't yet made the transition to really looking at what they could do with big data. They might put together an environment and point a legacy tool at it, or just do a little bit of ETL in it, and then feed a traditional environment. It's not just big business intelligence. You may have heard the joke, "What's a data scientist? A business intelligence professional who lives in California," right? We're not talking about that.
The second myth is the notion that packaged applications are just about to emerge. There's a lot of people out there talking about how big data is going to immediately be followed by a wave of packaged apps that do the interesting use cases right on top. You just plug one in, and you're ready to go. We profoundly believe that's not going to happen for a long time, because these applications, the ways you can work with big data, are creating competitive advantage. They're differentiated. In most cases, the variety and complexity of the data means that it doesn't have a simple schema. The way it works for you is different.
So we think, while there's a lot of entrepreneurs raising money to go out and build the packaged applications for big data, the situation is rather like it was in the late 1990s, when a wave of application service providers started out with the idea that they would quickly get cloud-based applications in front of the market. It took about 10 more years before you started to get real traction for a whole new breed of applications that were designed from the ground up to work in the new context - software as a service. We think the same thing will happen in big data. It's going to take time to figure out how to package and create reusable artifacts as use cases mature.
The third myth: the enterprise can wait. We think there's really a great opportunity this year and next year for the enterprise to get competitive advantage and move into the space intelligently with a thoughtful strategy. But that window of opportunity to have the initiative will quickly evaporate as [inaudible 00:08:29] organizations will instead be under the gun, chasing their competition and dealing with unpleasant impacts on their value chains from those that moved sooner.
The fourth is the notion that low-cost, low-skill staffing will work. We really think that's a throwback to a model and a mindset of the last decade. I don't think anyone put it better than [Geoffrey Moore]. Talking about big data, he says, "Look, in the last 10 years, enterprise IT has been on hold. It's been about cost cutting, commoditization, packaged rollouts. It hasn't been about invention and new creation of value."
This is a new game. We're talking about disruptive new applications. We're talking about creating new offerings. We're talking about something where you have to align the business, technology, advanced analytics, and mathematics. Those things coming together need alignment, easy communication, and high skill. And I think Jim put it well: there are some tremendous opportunities here, but you don't get these results by gutting it out with a low-skill team with a commodity skill set. It requires some sophistication.
Number five: it's simple to get results. There's great value here, but let's say it straight: right now, big data is still leading edge. That's why there's this explosion of innovation, of great new ideas to make it better, more mature, to lift the boats and get to more and more stable and mature results. But today, the technologies are still leading edge. They're not really established in the mainstream. Moreover, the patterns and the mindsets for how to organize, architect applications, and make things work have not yet settled in.
Learning how to build on a whole new data fabric is a lot more complicated. Organizations have been used to writing database applications for 30 years. Changing the assumptions of how data gets stored and worked with is a big deal. You don't suddenly understand that shift just because you attended a one-week class on some APIs.
The next one is a fascinating one to me: the notion that you could automate all the intelligence. There's a meme out there that there are going to be automated tools that will obviate the need to even do data science. That you'll just plug in a magic service in the cloud, feed it your data, and out will come all the answers. I don't think [HAL] is going to be visiting us anytime soon. Of all the myths, this is the one I was most surprised to read about in the press, that people were promoting this idea. Probably because organizations are so tired of trying to hire data scientists that they're just hoping there's an answer in a box, even though it seems ridiculous to say there could be.
Finally, the last one: you can buy it off one vendor's stack. This is an era of innovation. It's about integrating best-of-breed components. It's about building on standards. So single-vendor stacks are not the path that we see moving forward.
We're big believers at Think Big in a methodology of really incremental adoption. One of the key things is that we don't want to see customers fall into the 6-12 month cycles that Jim described in terms of prolonged rollouts of capabilities. We think a lot of it comes from an approach of envisioning: having a roadmap and planning ahead how you're going to execute as the best practice. Educating incrementally and building up capabilities in the organization. And finally, a nimble engineering process of rolling out capabilities quickly.
So that's our basic way of structuring our service offerings to support those different stages, and that's how we think about best practices. Before I get too much into best practices in each of those phases, I thought it'd be good to level set and talk a little bit about some of the kinds of use cases and patterns.
One of them that we think is obviously quite important is interactions with the customer. How do you start thinking about putting together data? Whether it be web data, CRM data, advertising data, mobile data, feeds from different providers. What we see is there's long been a vision, a goal, a hope of having a more complete customer view - the 360 degree view of your customer. A way of holding together all this information.
But in the classic warehouse world, it was elusive. Because you never had the agility to normalize all the data, to organize it, to get it into a form where it quite fit together. The enterprise was changing faster than your ability to assimilate, integrate a coherent model of the customer.
What's different with big data is you can now load the data in a more raw form and analyze it based on the needs of a specific use case, and then invest only in the experiments that are working. So you can start with simple things like finally doing some rough correlation across many channels and basic reporting that was elusive in the old system. And that's just table stakes. We've had customers delighted with that as a starting point, but then move on to do more advanced segmentation and modeling, and starting to optimize and personalize their interactions with customers.
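The "load it raw, analyze per use case" idea can be sketched in a few lines of Python. This is a hypothetical illustration of schema-on-read: events stay in their raw form, and only the fields a given question needs get pulled out at analysis time. The field names (`customer_id`, `channel`) are invented for the example, not taken from any real customer system.

```python
# Schema-on-read sketch: raw events are kept as-is; we extract only
# what this use case (rough cross-channel correlation) needs.
import json
from collections import defaultdict

# Stand-in for raw event logs landed in a big data store.
raw_events = [
    '{"customer_id": "c1", "channel": "web", "page": "/pricing"}',
    '{"customer_id": "c1", "channel": "email", "campaign": "spring"}',
    '{"customer_id": "c2", "channel": "web", "page": "/home"}',
]

def channels_per_customer(lines):
    """Rough cross-channel view: which channels has each customer touched?"""
    seen = defaultdict(set)
    for line in lines:
        event = json.loads(line)  # parse at read time; no fixed schema upfront
        seen[event["customer_id"]].add(event["channel"])
    return seen

view = channels_per_customer(raw_events)
# The "table stakes" report: customers seen in more than one channel.
multi_channel = [c for c, chans in view.items() if len(chans) > 1]
print(multi_channel)  # -> ['c1']
```

Nothing here required normalizing all channels into one warehouse model first; a different use case would parse different fields out of the same raw lines.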
So you see there's a range of use cases where you can get started quickly and start to build on value. And it's driven by, not only a diversity of data sets, but an opportunity to build intimacy with customers to create value in your brand and interactions.
So that's one frame, one kind of use case. An example of that, that we think is interesting, is social media. It's obviously a popular dataset, and many companies are looking at it. We think American Express' recent efforts in this space are instructive. You see them actually being willing to pay money for data linkage, offering special promotions to customers who link their American Express card with Twitter, so they have a connection between those identities, or likewise with Facebook.
So you see this is starting to get into, not only tying together these big data sets of massive amounts of credit card data and massive amounts of social media, but also a deliberate strategy of sourcing data through a promotion campaign to build intelligence in the organization. We think that's instructive. That's just a little bit of a lens to drill down into an interesting use case.
Since our talk today is about best practices and not about a series of use cases, I'm not going to talk about a lot of others. But I do want to touch on another theme we see. Not only do you see a lot of consumer-facing 360-view data, but we also see a lot of machine data, a lot of machine-to-machine analytics. Whether you see network providers doing advanced security analysis and detection with data science models, or whether you look at massive-scale automated support for devices on a network that allows for better service, support, and cross sales.
Or whether you look at internal network data for managing, optimizing, and even license compliance. We've worked with customers at some massive scale with machine data. So it's not just about the things that users are doing. In fact, ultimately, we think the volume of data that machines generate will dwarf the amount of data that humans directly generate in the big data environment.
Indeed, a good example of this in the social media context is "Zuckerberg's law," as he modestly phrased it: the amount of information shared on social media doubles every year. Some of that is due to increased activity, increased adoption of social media. Some of that can be because of things like more precise devices. But a lot of it, over time, is happening because there's more and more automation. The more you go and do things, the more it just automatically gets shared. It gets put out on the wire and brought into a social context. We think that trend is going to continue too, and that the growth of machine-generated data being shared is going to generate more opportunities as well.
With those two overall ideas, customer 360 data and machine data, as anchors, as ways of thinking about how you work with data, let's talk a little bit about some of the best-practice approaches to actually going after creating value from data. In the first stage: thinking big. We are big believers in having a roadmap, an envisioning approach for "how do you get to value?" We see this as the diametric opposite of many organizations that come at it bottom-up: "I've read about some of this technology. I'm going to stand up a technical environment, and then I'll figure out if there's some value in it."
We think it's great for people to do experimentation, but as a leader in a company, there's a much better way of proceeding, which is to really take a creative look at business opportunities that can be enabled by your data. What is your data? What are some of the initiatives? It takes some work because it's not simple. Organizations have been conditioned for so long to accept that they have little bits of data, that certain things couldn't be done, that certain ways of looking at things couldn't work.
So it's a process of thinking through, "What are others doing? What are some creative ideas? What if I didn't have some of my constraints? How could I use data, and ultimately analytics, to automate my response?" Putting together initiatives and targets based on what can work and what's been done, then tying that into the envisioning process of building a roadmap.
We really see the next key step as putting together an analysis of the current capabilities of the organization and matching it to what can be done, to put together a roadmap for what to roll out. Some of this has been alluded to earlier, like in Jim's slides, where he was talking about, "Look at these tradeoffs of different layers in the architecture, and look at the different deployment options." The answer is, typically, the right choice for an organization is going to be context dependent. It's not one size fits all. You need to match the capabilities and the choices you make to the business requirements and the use cases that you're trying to solve.
So you want to have a vision for where you're going, what your roadmap is, and tie that to a rollout of capabilities, all based on a notion of having flexibility in the architecture. Because the one thing we can bet on is that a lot of these technologies are going to be transformed and changed over the next few years, even as they're the workhorses creating value for us in the enterprise. We think you not only want to have a technology strategy, but you also want a strategy for how you develop your organization incrementally. What are the new roles and capabilities?
One thing I want to call attention to that's often overlooked is that part of this is actually a strategy for how you create value out of data. What are the data sets you already have that are valuable? What are the data sets that you can source that are valuable? How do you integrate and combine data to create value? There's a certain irony in this because, of course, Infochimps was originally conceived around enabling the value of integrating multiple data sets together - having a marketplace.
The thing we see over and over again is enterprises are looking at, "What's the value in our data? How can we combine it with other data sets to create value? And how can we ultimately change our mindsets?" So often, IT's mindset around data has been, "Let's limit the amount of data to minimize our complexities in managing it." Instead of, "Let's look at all the data that we might be able to use to create value for the enterprise."
When you start thinking about it that way, you start thinking about big data as a strategic weapon and think about what information you can access. And how you might even share information with your suppliers, customers, and partners to create value for your enterprise.
Going on from that: start smart. We think it's also really important to have an approach for how you work organizationally. How do you put together the different teams? A lot of what's going on around big data is a transformation of roles. It's not only skills, but how people work together. One of the more notable elements of that is this notion of data science: the notion that you've got specialists whose job is to dig into data and create value out of it. But it doesn't stop at manual analysis.
So often, we hear the notion of data science presented as having a really smart person who does brilliant one-off insights into the business. We think there's a role for that in data science. But ultimately, you want to have a process of automating how you respond to data. So instead of traditional decision support, where you have people looking at reports and reviews, saying, "What am I going to do?", you have an automated system where data is feeding decisions, and you have a team that's working together to make those decisions better and better over time, to optimize them. To analyze what's working and what's not. To write production software that continually feeds the production flow to make it work better.
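To make that contrast concrete, here is a toy sketch in Python of the shift from manual decision support to an automated response that a team tunes over time. The threshold "model" and the feedback format are invented purely for illustration; a real system would use proper statistical models and evaluation.

```python
class DecisionPolicy:
    """Toy automated-response policy: data feeds the decision directly,
    and the team's job shifts to improving the policy from feedback."""

    def __init__(self, threshold=0.5):
        # Hypothetical starting cutoff for acting on a score.
        self.threshold = threshold

    def decide(self, score):
        """Automated decision: no human reviewing a report in the loop."""
        return "offer" if score > self.threshold else "hold"

    def refine(self, outcomes):
        """Feedback loop the team runs: given (score, converted) pairs,
        loosen the cutoff toward the lowest score that actually converted."""
        hits = [score for score, converted in outcomes if converted]
        if hits:
            self.threshold = min(hits)

policy = DecisionPolicy()
before = policy.decide(0.45)  # "hold" under the initial cutoff
policy.refine([(0.4, True), (0.7, True), (0.6, False)])
after = policy.decide(0.45)   # "offer" after learning from outcomes
print(before, after)
```

The point is the shape of the loop, decide automatically, measure outcomes, refine, not the trivial rule itself.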
Organizing for success is a close collaboration between data scientists, engineers, and the business. Needless to say, that requires new skills. So not only do you need organization design, you need to have a center of excellence and an approach for incremental adoption. Rome wasn't built in a day. You don't go overnight from an organization that hasn't done this to having everyone on your staff be experts.
But you need to cultivate expertise and roll it out, and that includes various approaches for developing skills. That includes bringing in outside experts. That includes training your existing staff. That includes making strategic hires for how to ramp your team and make them successful.
Finally, rolling through the process of best practices: how do you implement real, radical results through big analytics? We're big believers, ultimately, in an integrated approach for how you tie together each of these elements. When you get to the delivery line, the final point where you're actually doing the engineering, you want it all tied together: you had a coherent strategy that you envisioned, you had good skills, and you do nimble releases of both engineering value and modeling analysis through data science.
One of the big things that we emphasize is getting value out quickly and testing live. So building things and doing releases, we encourage enterprises to build releases of value every eight weeks. Get something out there, and start to get feedback. Because it's a fast moving space. That doesn't mean everything you'd ever want to do can be built in eight weeks.
But what it means is you think hard about how to chunk things up, so you get something out and you can test and learn as rapidly as possible in those short cycles. Move quickly with iterations, and develop your teams. Develop the synergy between the strategy, listening to external market events, external technology enablers, new data sets, and your engineering approach.
In this world of an integrated approach to drive business philosophy, to learn and create value for the enterprise, we see new IT platforms emerging. We think a lot about how the different technologies fit together. A lot of what big data has done to architectures, to the IT stacks that support them, is decompose them. The mythical, monolithic database has been broken apart into different data services that serve different purposes.
You've got batch analytics with specialized languages and tools, maybe running in Hadoop. You've got low-latency analytic access in MPP databases, unstructured databases like HBase and Cassandra, and distributed indexing systems like Solr. You've got near real time response to events from systems like Splunk or Storm. You've got scale-out... not MPP, typically NoSQL databases, or SQL at smaller scale, responding to mobile and web events. There's an interesting interplay among these systems. You're building models and scoring them in quick succession of micro-batches in the batch environment, and pushing them out for response at the edge.
That could mean things like updating a credit risk. Or it could mean updating a propensity model to make a recommendation to a customer, where you have deep analysis in batch that scores the model across history, and then short-term triggers that update state in near real time in a NoSQL database when salient events hit. Like, "The customer just went to this page, so I know I'm going to recommend that product to them as they walk around the website."
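As a rough sketch of that batch-plus-trigger pattern, here is a toy version in Python. A plain dict stands in for the NoSQL serving store (HBase or Cassandra in a real deployment), and the propensity "model" is deliberately trivial; every name and scoring rule here is an invented illustration, not a real system's API.

```python
# Toy sketch: batch scoring over history, plus a near-real-time trigger
# that updates serving state when a salient event arrives.

serving_store = {}  # stand-in for a NoSQL store such as HBase/Cassandra

def batch_score(history):
    """Batch/micro-batch job: score propensity across full purchase history
    and push the results out to the serving layer."""
    for customer, purchases in history.items():
        serving_store[customer] = {"propensity": min(1.0, 0.1 * len(purchases))}

def on_page_view(customer, product):
    """Near-real-time trigger: bump state immediately on a salient event."""
    state = serving_store.setdefault(customer, {"propensity": 0.1})
    state["last_viewed"] = product
    state["propensity"] = min(1.0, state["propensity"] + 0.2)

def recommend(customer):
    """Edge response: recommend only when state says it's worth it."""
    state = serving_store.get(customer, {})
    if state.get("propensity", 0) > 0.3 and "last_viewed" in state:
        return state["last_viewed"]
    return None

batch_score({"c1": ["shoes", "hat"], "c2": []})
on_page_view("c1", "jacket")  # "the customer just went to this page"
print(recommend("c1"))        # -> jacket
print(recommend("c2"))        # -> None (no signal yet)
```

The batch job and the trigger write into the same serving state, which is the essence of the interplay described above: deep analysis in batch, fast updates at the edge.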
This combination of technologies and patterns, where you're taking events from your real time serving systems into the big data environment, analyzing them, and then creating new information to respond back out in near real time, is powerful. The IT team's job is to develop analytics and platforms to bridge that, and to incrementally build out these capabilities across platforms to serve a variety of use cases. Needless to say, there's a lot going on here, and you don't have to start with all of it. The more you can automate and simplify the tooling, the better.
Pairing with that, I talked a little bit about another key best practice, which is to institute the process of data science. Data science is truly a unique discipline in most organizations. There have previously been pockets of individuals performing functions like this in the enterprise: people like quants on Wall Street, or risk analysts.
But so often, what we've seen is that traditionally, these specialized applications of math and statistics have not been focused on data, on working with new information and creating new value. Instead, it's been a time-honored tradition of tweaking established models for small variations in the real world.
Data science is fundamentally bottom-up. It's about creating value, and it requires a close collaboration with the business as you navigate new opportunities, creating new products, digging deeper, explaining phenomena that were not understood. So it requires being a sleuth, a statistician, an artist, and a programmer rolled into one.
It's a new skill set, and we definitely see there are people from diverse backgrounds, often scientists, who are good at this and who are becoming data scientists in the enterprise. But even more, we see the emergence of graduates from college programs who are studying this, are excited about it, and are entering industry, taking on the mantle of data science.
I think the other thing I'd like to [inaudible 00:27:11] is, we have a lot of enterprise customers that are proud of the first data scientist they've hired, or the two or three they've hired. But we don't think that's the right order of magnitude. You need to have a lot more people doing data science and creating value out of your data.
When you look at the pioneers on the left of my first diagram, companies like Google, Amazon, or my old company Quantcast, they had many people digging into their data, creating new products, and creating value on top of it. Even at the larger companies, more than several percent of the whole staff would be looking at the data, creating value.
I would say in the new information economy, that will be more of the norm. Companies will be investing more in creating value out of their information, rather than having a handful of people looking at dashboards to make decisions.
So with that, our conclusions about best practices are: One, big analytics is a critical capability in the enterprise. Two, your organization can create a tremendous amount of value from it now. Three, you need to get help to get off on the right foot quickly. And four, follow an incremental adoption approach.
With that, I'd like to thank you, and take just a couple of minutes here for questions before we turn it over to the panel.
Audience question: Ron, great presentation. You're smarter than I am. But take the suggestion that it's ridiculous that software would automate the process of discovering business insights, that there's no way software, which is encapsulated knowledge and the automated application of that knowledge, could do it. Take an Excel spreadsheet, which my son uses to manage a business. He's nine. He's very smart, but he's not that smart. But he uses Excel, and it helps him.
Why is it ridiculous to suggest that software can let business people use powerful tools, visual tools, 3D modeling, kinetic based tools, what have you, to extract business value from these data sets, if there's enough intelligence built into those tools?
Ron: The "if" of... "if there's enough intelligence built into those tools" is a big one, right? I think you always have to be careful in defining your terms. Is there a possibility of having better visualization and guidance for users? Certainly, yes. But at the same time, take past attempts to give business users visual programming tools that solve a problem that's fairly simple from a technical standpoint, like rules for how to respond when an e-mail arrives.
It turns out most non-programmers don't really use those kinds of interfaces, which are, from a technical complexity standpoint, very simple. So now you get into the complications of, "How do I guide an automated system, and look at what it's doing right and wrong?" It's a problem that exercises some of the most brilliant people: data scientists digging into what's going right and wrong in these automated models.
At some point, when we have artificial intelligence in the world, we might. But I think it's in that order of complexity: a system that you just dump a bunch of data into, and out comes business insight and automated response. It's so far from reality that I think a little analysis says it can't happen. Does anyone here feel like automated intelligence is right around the corner?
Audience question: Smart tools still need smart users.
Ron: You still need smart users. The tools can get better and help people. Every successful data science effort I've seen involves brilliant people working with great amounts of data to create disruptive value.
Audience question: My feeling is the tools are for creating questions, not answers. That's what they're good at right now. You see companies that are hiring their first couple of data scientists, or, I think, often handing existing analysts, PhD statisticians, a copy of the elephant book and saying, "Have at it."
Can you talk about what you see as the adoption curve in terms of the size of the analyst team versus the amount of data under management and cluster size? What's people's trajectory from small clusters, only hundreds of gigabytes and terabytes of data, on up? Is that a curve without bound? Or is there a part of it that you think is the most populous?
Ron: I think that the adoption curve is a feedback loop of value. We definitely see organizations that have been able to really get on big data, create tremendous value, and have very fast ramps: tens of petabytes of data and large teams working intensively with the data. But we also see organizations that have hit plateaus, where they get to some level of value, but they didn't have the business support and the creativity to realize what they could do with data, and they plateaued in their initiatives.
So it's not by any means one curve. There's curves of great success. There's curves with gradual success. There's curves of delay that we see. Even companies that haven't started, so companies that don't even have a curve.
Audience question: You've had over 40 clients that you've worked with and taken through the process. And you've done that in various industry segments. I'm curious as to which industry segments have been more nimble in going through this transformation. They really get it, and they get it fast. Specifically, if you could cite some examples as to ones that didn't, and the ones that lagged or are mired in something. Without naming the names of the clients. Just the industry and the application areas.
Ron: Yeah. I would say that the industries where we've seen probably the fastest adoption have been industries that have had more of a technology and information focus. In media and advertising, we tend to see that there's lots of precedent. There's lots of urgency, and there tends to be some of the right skill sets. So there tends to be some fast adoption in that sector.
We've also definitely seen it in technology firms, technology product companies. You've got an interesting combination there: the business buyer on the product or engineering team is typically fairly technical, and so it's easier for that kind of organization to get their head around these solutions and support and adopt them.
The third industry where you tend to see fast adoption is financial services. There you typically have a culture of using information aggressively, and you typically have meaningful pools of people with math and stats skills. While there may only be a subset that are the right combination to be good at data science, if you have a pool like one of our customers has, hundreds of [SAS] modelers, you can move quickly. That customer sent over a hundred of them to training we've done to teach them some of the basic tools.
And you see a meaningful fraction of them really latching on, being very effective, getting excited, and starting to use the new techniques. Having a pool of hundreds of people with that background who could potentially start doing data science creates, in that case, an interesting problem of how do you gate the demand for people? There are so many people that want to get their hands on the new system. While it's cheaper, it's not free, right? Those tend to be the leaders.
I would say that the biggest inhibitor to adoption is really that lack of sponsorship, where you see more of a setting up an environment in the hopes that there will be use cases that follow. So more speculative, not connected with the business. That can be an inhibitor. I would say that's probably the most frequent cause. As well as the usual organizational dynamics in any large company that is trying to get its head around the disruptive change.
Audience question: All right. Thanks, Ron.
Ron: Thank you.
Moderator: One more question.
Audience question: My question is for you and maybe the group as a whole, because of the point you were making earlier: you just can't shove data in a thing and have it kick something out. How do we get the industry educated? Because I was sitting in front of the CIO of a $37 billion a year company. So he should know what's going on. And his exact words to me were, "Yeah. What I really want to do is just take this information, shove it in Hadoop, and see what it tells me." That was his comment. I was like, "Okay. That's a good idea, but let's talk a little bit about what that means."
How do we, as a group of people who care about this industry, get the industry prepared for what this really takes? Maybe it's a challenge to the people here. We've got everybody here. We've got a lot of players in this space. Is there an opportunity for us to create something that can help communicate to the industry? Maybe it's something that's already been worked on or is already being driven. But there's a big gap there.
Ron: I think the antidote to myth and misinformation is truth. So articulating coherently what's really entailed, dispelling myths, talking in depth about what's really working. Being open, collaborative, and sharing is the antidote. Now, I think the root cause for a lot of the confusion is that this is a hot topic. It's a hot space, so many technology companies, many organizations are looking to play in some way. Inevitably, you have a lot of noise in the market, and it's confusing. People aren't experts, and they're trying to figure out, "How do I sift out the true from the false claims?"
Audience question: Is there an opportunity for us to create some group that can help drive that discussion? I don't know... I'm throwing that out.
Audience question: A reality TV program? IT [inaudible 00:38:31]
Audience question: Sure, we'll call it that. Yeah, let's do that. I don't know. It's just a thought, and I threw it out there.
Audience question: [inaudible 00:38:37]
Audience question: Yes. I mean the space is so new, and there are so many misconceptions. For those of us working in it, the misconceptions are obvious. Maybe there's an opportunity to put something together. We've got a lot of people in the room. Now, I have to leave in five minutes, so it's up to you guys to figure that out.
Moderator: And when you do leave, we'll be assigning you a lot of tasks for that.