So let me just give you one or two examples of why that frankly is not good guidance. The first is that you have to select the data set you're going to use with the machine-learning technique, and that's where human bias is front and center. What do you select? The second is that you need to select a machine-learning technique to apply against the data you've now selected, right? That is point of human bias number two.
Then, three, just to make it a nice round number, you have to interpret and tweak the output, which in many cases determines how the machine-learning technique is actually used. That is point of human bias number three. You can't remove human bias from this simply because you're throwing around the marketing term of the moment, which is machine learning, right?
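To make those three decision points concrete, here is a minimal sketch assuming a scikit-learn style workflow; the dataset, algorithm, and threshold are all placeholders chosen for illustration, and the comments mark where a human judgment, and therefore potential bias, enters.

```python
# Sketch of where human judgment enters a typical machine-learning workflow.
# Everything here (dataset, algorithm, threshold) is illustrative, not prescriptive.

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Bias point 1: a human decides which data set (and which rows and columns) to use.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bias point 2: a human decides which technique and hyperparameters to apply.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# Bias point 3: a human interprets and tweaks the output, e.g. choosing the
# probability threshold that turns scores into decisions.
threshold = 0.7  # arbitrary human choice
predictions = model.predict_proba(X_test)[:, 1] >= threshold
print(f"Flagged {predictions.sum()} of {len(predictions)} cases at threshold {threshold}")
```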
And it comes down to having some basis for what you're claiming to understand. It stirred things up to be that declarative about why that was not the best guidance. Ultimately, what was interesting is that the people involved tried to turn it into me being, you know, not a nice guy because I pointed out some facts. They never actually pushed back on the substance, on the idea that what they said was not correct.
The reason for bringing this up is that anyone who makes assertions in this space but is unwilling to engage on those assertions or provide evidence for what they say is probably someone who doesn't know what they're talking about. You need to look at the things people say very carefully, whether it's us or other people in the space, consultants or software sellers, anyone who makes big, sweeping, universal claims.
Be very skeptical, right, because in the real world there are usually more complicated motives. Ask for evidence for why people are saying what they're saying, and if they answer in marketing speak, then perhaps you have reason to be concerned. In other words, the people who are talking to you might actually have human bias too.
The first step in dealing with bias is admitting that you have it. I mean that anything that looks and smells like a pizza is unfairly judged to be superior in my mind, right? So the first step of getting over your pizza infatuation is admitting that you have a bias toward pizza, right?
It is lunchtime in my time zone. Can you tell I'm hungry? I have a frozen pizza cooking in the oven. There's one more myth I wanted to discuss, and it goes along with the theme of machine learning. The myth is that the models you score against in real time are also built with machine learning in real time. In other words, we have found that the following things, when they happen in the same time sequence, are indicative of, say, a robot that is about to introduce quality issues into your manufacturing line.
In real time, you're not building the model that says here is the correlation between these things. You are scoring against a correlation that has been determined offline. Machine learning is a very iterative, non-real-time activity. It may take days, weeks, or months to get it right. You want to build a model and test the model.
You're going to tweak the model. You're going to go back and do it again. You're going to go back and score it against lots of data. You're going to rerun it. You're going to see how the model performs in practice to check that it is working accurately. The output of all of that may be something that you can, in fact, score against in real time, and we would argue you really should.
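To make that offline-training versus real-time-scoring split concrete, here is a minimal sketch, assuming scikit-learn and joblib are available; the model type, feature names, and file name are purely illustrative and not anyone's actual system. The point is simply that training happens offline and only the already-trained model is applied to new data in the live path.

```python
# Minimal sketch of the offline-training / real-time-scoring split described above.
# The sensor features, file path, and anomaly-detection framing are assumptions
# made for illustration only.

import joblib
import numpy as np
from sklearn.ensemble import IsolationForest

# --- Offline, iterative phase: build, test, tweak, rerun ---
# In practice this loop runs over days or weeks, not in the live data path.
historical_sensor_data = np.random.rand(10_000, 3)  # e.g. temperature, vibration, torque
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(historical_sensor_data)
joblib.dump(model, "line_quality_model.joblib")  # hypothetical artifact name

# --- Real-time phase: score new readings against the frozen model ---
# No training happens here; we only apply the correlations learned offline.
deployed_model = joblib.load("line_quality_model.joblib")

def score_reading(reading: list[float]) -> bool:
    """Return True if a single sensor reading looks like an emerging quality issue."""
    return deployed_model.predict([reading])[0] == -1  # -1 means anomaly in scikit-learn

print(score_reading([0.9, 0.95, 0.99]))
```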
Don't let your organization get sick. Recognize the signs of illness and do something about it before the illness takes hold. You may recognize, in real time, the things that the machine-learning models have told you to look for, but you are not building the models in real time. Claiming otherwise is just another case of people not understanding what they're pontificating about, right? When something really doesn't sound right, you probably have good reason to question it, and usually it's because the person doesn't know what they're talking about.
So now that I've debunked those Big Data myths, I'm sure they're going to come back up. No one asks me before they publish this stuff, which is fine, and to be clear, I've been kind of flippant because I'm obviously annoyed that a couple of people at a couple of firms have put out frankly very self-serving guidance. The truth is what it is, right? These technologies are disruptive. You suck it up and adapt, like we have.
You don't try to market your way through it. Actually do the work to understand the technologies and use them the right way. Ultimately, that's what's required for customer success, so let's focus on customer success. And despite my clear annoyance at the bad guidance that's put out there, I will be the first to say that I'm learning every day.
Hopefully listeners don't take away any more from this than the fact that there is questionable guidance out there. You need to consider the source and evaluate how likely they are to give you their honest position. All of us, especially me, are learning every day. The topics we cover and the guidance we give are going to evolve over time. I hope our having some fun with these topics doesn't give people the impression that we think we know everything, because believe me, as my nine-year-old daughter reminded me this morning, there's very little, in fact, that I actually know, and it turns out fashion sense is not part of it, just for the record.
That’s all for today. Thanks everyone.