Brief summary
Looking Glass isn't like most other technology trend reports. It doesn't just tell you what deserves your attention; it's designed to help you focus on what really matters to you. Published once a year, ºÚÁÏÃÅ intends it to be a tool that helps readers make sense of the emerging technologies that will shape the industry in the months and years to come.
In this episode of the Technology Podcast, lead Looking Glass contributors Rebecca Parsons and Ken Mugrage trade hosting duties for the guest seats, as they talk to Neal Ford about the most recent edition of the Looking Glass (published in January 2024).
They explain what the Looking Glass is and outline some of the key 'lenses' that act as a framework readers can use to monitor and evaluate what's on the horizon. Covering everything from AI to augmented reality, this conversation offers a new perspective on emerging technology to help prepare you for 2024.
Explore Looking Glass 2024.
Episode transcript
Neal Ford: Hello, everyone, and welcome to the ºÚÁÏÃÅ Technology Podcast. I'm one of your regular hosts, Neal Ford, and we have yet another of our hybrid setups today, where my regular co-host, Rebecca Parsons, is joining me, but she's actually here as a guest, so I'll let her introduce herself.
Rebecca Parsons: Hello, everyone. This is Rebecca Parsons, Chief Technology Officer Emerita. I have to keep remembering to say that, and as Neal said, I'm one of your regular co-hosts, but I'm here today to talk about one of our publications called The Looking Glass, along with my co-contributor to this, Ken Mugrage, who is also one of your normal co-hosts.
Neal: Let's talk about Looking Glass. This is a regular publication produced by ºÚÁÏÃÅ, and, apropos of its name, it looks into the future of tech. What is Looking Glass?
Rebecca: Well, for several years, quite obviously, since we're a technology company, we've put together a tech strategy. We focused first on the things that are actually happening, and those naturally grouped themselves into what we now call lenses within the Looking Glass. That by itself isn't terribly interesting as a strategy, because it's simply facts.
Now, our reading probably does vary, because we're going to bring our own lens to it, as opposed to, say, an Accenture or a Deloitte, but those are still facts; they're just what's happening. The important part of the Looking Glass is the concrete advice. We divide the various trends into these lenses, each of which encapsulates a business problem and includes our recommendations: these are the kinds of things you should think about if you're going to respond to this as an organization.
Neal: In much the way a lens focuses light, each of these focuses an area of technology on the things that are interesting.
Rebecca: Exactly. An individual trend might appear in multiple lenses, and we might even talk about it differently, because of the perspective we're taking through that lens.
Neal: The nice thing about the lens metaphor, too, is that it's a view on technology. Let's talk about what's in this edition of Looking Glass.
Ken Mugrage: Sure. I'll start with the first one. We have five lenses this year. The first one we're calling AI everywhere. Now, that probably doesn't sound like a surprise to a lot of folks. Again, if you look at most trend reports that come out, they talk about what happened in 2023. What we really want to focus on here is AI in the large, or traditional AI, or whatever you want to call it. It's not just generative AI.
That is certainly very important, but there is more to the world than just LLMs. In this lens, we really talk about leveraging these breakthroughs to scale your business. Of course, we look at it for software development. We don't think it's quite the panacea for creating code that some do, but it's pretty good there. Also, what are the other uses? How can it make individuals more efficient?
One thing that resonates with me is what we call the terror of the "blank page", which is really just getting started. I'll dump some bullet points into a chatbot or whatever, and try to get some ideas back, just to get the juices flowing. We really think that enterprises need to be looking at AI as a whole, not just the generative side. Again, it's not a miracle cure for anything.
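To make that blank-page workflow concrete, here is a minimal sketch of the pattern Ken describes: dumping rough bullet points into a chatbot to get a first draft back. It assumes the OpenAI Python client and an API key in the environment; the model name and prompt wording are illustrative choices, not recommendations from the Looking Glass.

```python
# A minimal sketch of the "beat the blank page" workflow: turn rough
# bullet points into a first draft you can react to and edit.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

bullets = """
- looking glass 2024: five lenses
- AI everywhere, not just generative AI
- concrete advice, not just trend-spotting
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Expand rough bullet points into a short first draft. "
                    "Mark anything you had to guess with [?]."},
        {"role": "user", "content": bullets},
    ],
)

# The output is a starting point, not a finished artifact.
print(response.choices[0].message.content)
```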
Then, the last thing I want to talk about in the AI lens: we have a section in the lenses that we call signals. This is, if you will, the evidence: what have we seen that makes us believe this is real? Of course, for AI, that's not hard to prove, right? It's everywhere; it's in the news everywhere and so forth. One of the things we're really looking at is LLMs speaking new "languages," in air quotes.
For example, we worked on a project called Jugalbandi that takes a voice query in one of India's languages, records and translates it, and then gives the speaker feedback, and there's a whole lot more to it, around the services that are available to them. The fact that we're seeing these capabilities expand out there is one of the biggest signals.
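Jugalbandi itself is a far richer system than anything shown here, but the general shape of such a voice pipeline (speech in, translation, an answer, speech back) can be sketched. The sketch below uses the OpenAI Python client purely as an illustration of the pattern, not as how Jugalbandi is built; a real deployment would reply in the caller's own language and ground its answers in a curated corpus of available services.

```python
# Illustrative shape of a voice-query pipeline like the one Ken describes:
# speech in a local language -> English text -> answer -> speech back.
# This is NOT how Jugalbandi is implemented; it only shows the pattern.
from openai import OpenAI

client = OpenAI()

def answer_voice_query(audio_path: str) -> bytes:
    # 1. Transcribe and translate the spoken question into English.
    with open(audio_path, "rb") as audio:
        english_text = client.audio.translations.create(
            model="whisper-1", file=audio
        ).text

    # 2. Answer the question. A real system would ground this step in a
    #    curated corpus of the services available to the caller.
    answer = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[{"role": "user", "content": english_text}],
    ).choices[0].message.content

    # 3. Speak the answer back to the caller (here, in English; a real
    #    system would synthesize speech in the caller's language).
    speech = client.audio.speech.create(
        model="tts-1", voice="alloy", input=answer
    )
    return speech.content  # raw audio bytes
```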
Rebecca: The next lens I want to talk about is our platforms lens. To be honest, every single version of our tech strategy has had something about platforms, because we do think a proper platform strategy, proper platform thinking, provides a very firm foundation for teams to be able to deliver business value faster and experiment faster.
We've gone through lots of different iterations of our platform lens, including looking at platforms that participate in ecosystems. Think, for example, about something like an Expedia: most hotels really don't have a choice about participating on that platform, and they've lost some agency, because you've got these marketplaces.
We've talked about the fact that many platforms haven't delivered the value that they could, in part because there's a lot of confusion about what a platform means. There is a platform business model, which many people think about, things like Uber and Lyft, where the entire business model is based on the existence of one of these platform ecosystems.
Then, you also have developer platforms or infrastructure platforms, which are quite low level and provide common technical services that are pretty much domain-agnostic. In between these two, you'll have a business capability platform, which makes it easier to deliver business value within a business domain. Here you'll see ideas from domain-driven design come in.
We've played a lot with this notion of how to do platforms right, over many issues of our internal tech strategy and in the first few editions of the Looking Glass. In this case, what we're trying to bring home is the notion that platforms really are foundational. As AI continues to explode, what do you need for AI? You need data. Most people's data, if they don't have a data platform strategy, is inaccessible.
We're looking at how to combine a data platform strategy, a technical platform strategy, and a business platform strategy to enable organizations to truly take advantage of artificial intelligence, and to provide value, both to the business and to their customers, as a result of the data that they have.
Neal: I think it's really valuable for us to define platform, because our Chief Scientist, Martin Fowler, describes this idea of semantic diffusion: once a word gets used too much, the definition starts diffusing. Platform has so suffered from semantic diffusion that every client I show up at has a different internal definition of platform. Actually stating what we mean by that, and then elucidating the benefits of doing it well, is, I think, a good public service.
Rebecca: Yes. It is critical, because one person says, "We finished the platform, and it's great." "Okay. I want to deploy this new customer service application." "Oh, well, we can't do that, because we deployed a developer platform." Part of what we're trying to do with this continued emphasis on platforms in the different issues of the Looking Glass is to really bring home the fact that you have to be clear on your platform strategy, and you can't do a platform strategy unless you know what you're talking about. Okay. Now, I'm going to turn it over to Ken to talk about the third of our lenses, evolving interactions.
Ken: Yes. Evolving interactions, it's interesting because you may have noticed, listeners, that these all tie together, and there are dependencies between them, and so forth. Actually, that's one of the things we find most interesting about Looking Glass. Evolving interactions is people interacting with computers and technology.
Last year we had one on the Metaverse, which we predicted would die a horrible death, and we're glad it pretty much has. This lens isn't just extended reality, AR, and VR; it's also voice interfaces. Like I said, the lenses are tied together. Chatbots are really getting a lot better, and we think they are going to continue to get better for our clients, especially when backed by proper data platforms and powered by AI that's looking at the right data sets and so forth.
We have a group inside ºÚÁÏÃÅ that's looking at AI very hard, of course. One of them showed me an experiment that she did recently, where you could ask questions and it would point only at Martin Fowler's website. It was really interesting, because then you got to see exactly what our answer to that question was. Getting that from a chatbot, an LLM, is really cool.
At the same time, she used a different library, switched over the same query, and asked it who Elvis was, and it knew, but Martin Fowler has never written about Elvis. That's why good data, and everything else behind these interactions, is so important. Also, we said in Looking Glass one or two years ago, I'm not sure which, that we were really watching out for the Apple product that could really shake up the market: the Vision Pro.
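The mechanics of that experiment can be shown with a toy grounding sketch: retrieve the closest passage from a fixed corpus and refuse anything out of scope, so a question about Elvis gets a refusal rather than a confident answer from the model's general training. The corpus snippets, TF-IDF retrieval, and threshold below are stand-ins; a production setup would use embeddings and hand the retrieved text to an LLM.

```python
# Toy retrieval-grounding sketch: only answer from a fixed corpus,
# and refuse out-of-scope questions (the "Elvis" failure mode).
# TF-IDF stands in for real embeddings; the corpus snippets are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Semantic diffusion: a term loses meaning as it gets overused.",
    "A platform provides a firm foundation for teams to deliver value.",
    "Continuous delivery reduces the risk of each individual release.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

def retrieve(question: str, threshold: float = 0.2) -> str:
    q_vector = vectorizer.transform([question])
    scores = cosine_similarity(q_vector, doc_vectors)[0]
    best = scores.argmax()
    if scores[best] < threshold:
        # Out of scope: refuse instead of letting a general-purpose
        # model answer from its training data.
        return "I can only answer from this corpus."
    # A full RAG pipeline would pass this passage to an LLM as context.
    return corpus[best]

print(retrieve("What is semantic diffusion?"))  # grounded answer
print(retrieve("Who was Elvis?"))               # refused
```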
We're lukewarm on it. There's a bunch of us who want one for entertainment on an airplane and that sort of thing, but we're not really seeing a lot of enterprise applicability right now. That being said, that doesn't mean we're writing off extended reality. I was talking to a retail client the other day about extended reality, and they said, "Oh, but I don't want virtual try-ons and stuff."
We said, "Well, what about fulfillment? What about your people in your warehouse wearing very simple glasses that say, 'turn left' or 'turn right'?" Incremental improvement is actually a really good thing. Again, from a signals perspective, we're seeing it everywhere. Of course, there's Apple releasing their product, and just the advancements in natural language processing and gesture control. It's set to take off.
I urge people not to get too discouraged by the fact that the Metaverse isn't how we're all living today. Rebecca likes to tell a story: if I'm going to buy insurance in the Metaverse, what does that mean? Do I stand in a virtual line behind a desk? I mean, what's the interaction like? All of that's not solved, but that doesn't mean that you should overlook the technology as a whole.
Neal: Just a quick follow-up to that. We saw the overnight success of AI after quietly working in the background for 20 or 30 years, and suddenly--
Rebecca: 50 years. [chuckles] Sorry.
Neal: Do you think we're approaching the same inflection point for augmented and virtual reality? Is Apple finally joining the market the harbinger of that? Or are we still a bit away from the big revolution there?
Ken: I think we still have a few years of "it will break through next year," because we say that every year, right? That's from a consumer perspective: everyone wearing glasses in their home, really interacting that way, having it be a sales channel, and that sort of thing. It's like a lot of inventions. Electric cars are mainstream now, but Tesla's first car, some people may remember, was a two-seat roadster that was really, really fast and really bad at everything else.
We make incremental improvements. When it comes to these, it's: get your data platform in order, have an AI policy, and do a chatbot for internal information. We have some things internally for that. There are all kinds of incremental improvements we can make, in extended reality, digital twins, and more; things that aren't necessarily flashy, but they're effective and they reduce risk. There's a whole host of things there.
Neal: Okay, let's move on to the next lens then.
Rebecca: Okay. What we're talking about here is a convergence of digital reality and physical reality. We have had sensor networks for decades. We had the notion of the internet-connected toaster, and I could never figure out why you would connect a toaster to the internet. There are many other things I could understand; a toaster was never one of them.
We've been talking about what you can do as you get more and more information from the physical world. Ken referenced digital twins a moment ago. We are seeing applications where you can model, say, a factory in the virtual world, and you have sensors and actuators all over the factory. You say, "this is what I want to do," you do it in the physical world, and it's reflected in the digital world.
Then, all of a sudden, you get to see, "Well, did this turn out the way I thought it would?" One of my favorite examples of this is some agricultural research going on in India, where they are looking at putting moisture sensors and chemical sensors in the soil, tying together a reading of what the soil is like in a very small area, maybe one meter square, with these repeated all over the fields, and tying it in with the weather reports. You say, "Okay, given what I think is going to happen in terms of rainfall, the state of the crop based on aerial photographs from drones, and what the sensors are telling me the soil is currently like, I want to turn on the irrigation for exactly this amount of time, and I probably need to put this amount of that fertilizer in that one square meter area." Then you do that.
Then, you examine, "Okay, did the crop respond the way I expected?" Of course, if the rain you expected didn't come, you'd have to take that into account. You start to be able to refine your model of how changes in the physical world really manifest themselves, and what interventions you can make. The advantage of things like that is you save water, you save fertilizer, and you reduce the pollution from the runoff of unnecessary fertilizer.
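As a toy illustration of that per-cell decision loop (every threshold, unit, and name below is invented for the sketch): combine one cell's soil reading with the rain forecast to decide irrigation time, then compare prediction with observation to nudge the model afterward.

```python
# Toy sketch of the per-square-meter decision loop Rebecca describes.
# All thresholds, units, and field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class CellReading:
    moisture: float        # fraction of field capacity, 0.0-1.0
    nitrogen_ppm: float    # from the in-soil chemical sensor

def irrigation_minutes(cell: CellReading, forecast_rain_mm: float) -> int:
    """Decide irrigation time for one one-meter-square cell."""
    target = 0.60                                       # desired moisture
    expected = cell.moisture + forecast_rain_mm * 0.01  # crude rain model
    deficit = max(0.0, target - expected)
    return round(deficit * 100)                         # 1 min per 0.01

def refine(predicted: float, observed: float, rain_factor: float) -> float:
    """After the fact, nudge the rain model toward what actually happened."""
    error = observed - predicted
    return rain_factor + 0.1 * error  # small corrective step

# One cycle of the loop: decide, act (not shown), measure, refine.
cell = CellReading(moisture=0.45, nitrogen_ppm=12.0)
print(irrigation_minutes(cell, forecast_rain_mm=5.0))  # prints 10
```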
We have all kinds of applications where we want to blur that line between the physical world and the digital world. I want to be able to influence the physical world from the digital. I want to be able to understand what those interventions have done, first in the physical world, and then reflect that back in the digital world, and use that to refine my modeling.
We can also use this for "what if" prediction scenarios. If you're looking at a particular manufacturing run, where these are the things that I need to do, what is going to be the most efficient way, given tool changes and material changes and such? I can simulate it in the twin. As long as there's good fidelity between the twin and the real world, I'm going to be able to do experiments that it would just be ridiculous to do in the physical world.
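A tiny version of that "what if" experiment might look like the following, with the jobs and changeover times invented for illustration: enumerate the possible orderings of a small run in the twin and pick the one with the least changeover cost, an experiment you would never run by trial and error on a real line.

```python
# Toy "what if" experiment in a digital twin: try every ordering of a
# small manufacturing run and pick the cheapest by changeover time.
# The jobs and the changeover matrix are invented for illustration.
from itertools import permutations

jobs = ["anodize", "drill", "paint"]
# changeover[a][b] = minutes to switch the line from job a to job b
changeover = {
    "anodize": {"drill": 15, "paint": 40},
    "drill":   {"anodize": 20, "paint": 10},
    "paint":   {"anodize": 60, "drill": 25},
}

def run_cost(order: tuple[str, ...]) -> int:
    """Total changeover minutes for one simulated ordering."""
    return sum(changeover[a][b] for a, b in zip(order, order[1:]))

# Exhaustive search is fine at toy scale; a real twin would simulate far
# richer dynamics (tooling, materials, queueing) for each scenario.
best = min(permutations(jobs), key=run_cost)
print(best, run_cost(best))  # ('anodize', 'drill', 'paint') 25
```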
I think we'll see this as we continue to explore advances in things like augmented reality, where a person can actually participate in this digital world and examine things, and as we get increased power at the edges. It used to be that the sensors were pretty dumb, and your actuators were probably turning something on or off; that was about it. We've got much better controls now.
We have much better, much more powerful sensors at the edges, where we can do much more complex analysis as we're building this digital twin. This is probably, again, still a ways away. We're seeing more signs of this in things like manufacturing, supply chain, fulfillment, et cetera. There was the big splash about how everybody's going to have an internet-connected home, and you're going to walk in, and you're going to talk to your lights, and they're going to do certain things.
Of course, your internet-connected toaster will have made your toast when you get up for breakfast. I don't know how that works. [chuckles] I think there are really practical implications for this, particularly, as I said, in manufacturing, fulfillment, and things of that nature. Looking at this convergence is going to be critical.
Neal: I think "your house is going to be connected, and your toaster will have an IP address" is very much like what Henry Ford said: if he had asked people what they wanted, they'd have wanted a really, really fast horse.
Rebecca: Yes.
Neal: Your house with these kinds of things is a really fast horse, basically. I think it's not even that far in the future, because of a very closely related project. We do these public katas on the O'Reilly site, and we recently did one for a startup that's all open source and open hardware, called Wildlife.Ai. They can track big animals in rainforests, but they can't track insects.
What these people have made is a little dish-shaped sensor. When an insect crawls across it, they photograph it and use AI to identify it. Now, for the first time ever, they're able to track the migration and movement of insects in the rainforest, which is giving them tremendous insight into something that was impossible to check as a physical person, because you can't follow bugs around, and you can't put radio collars on them. This is a great example of being able to create a model that so accurately reflects the real world that you can make some really interesting decisions with it.
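As a sketch of that pattern only (the classifier below is a hypothetical stub standing in for the real on-device vision model, and the data structures are invented): each trigger of the sensor pad yields an image, a species guess, and a timestamped observation that migration analysis can aggregate across sensors.

```python
# Sketch of the sensor-pad pipeline: trigger -> photo -> classify -> log.
# classify_insect is a hypothetical stub; the real project runs an actual
# vision model on device. The data structures are illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Observation:
    species: str
    confidence: float
    seen_at: datetime
    sensor_id: str

def classify_insect(image: bytes) -> tuple[str, float]:
    """Hypothetical stand-in for the on-device vision model."""
    return "unidentified", 0.0

def on_pad_triggered(image: bytes, sensor_id: str, log: list[Observation]):
    species, confidence = classify_insect(image)
    log.append(Observation(species, confidence,
                           datetime.now(timezone.utc), sensor_id))

# Aggregating observations across sensors over time is what turns
# individual sightings into migration and movement patterns.
observations: list[Observation] = []
on_pad_triggered(b"<jpeg bytes>", sensor_id="pad-17", log=observations)
print(observations[0].species)
```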
Rebecca: I think both in this lens and in the evolving interactions lens, we do have to reimagine how we solve problems. Who would have thought, "I'm going to track insects by putting a physical pad there, and the insect is going to walk across the pad"? Similarly, in my example, what does it mean to transact in the virtual world? We understand how to do collaboration in the virtual world, and how to do gaming, entertainment, social interaction, and even training. There are some fascinating applications around remote maintenance, where you've got the expert who gets to sit in the nice warm office, and you've got the poor apprentice who is out there freezing to death, but actually having to fix it.
That apprentice is able to rely on the knowledge of the expert. Those are all relatively easy to conceive of. We've got a lot of creativity yet to be applied to how we really solve problems. What does XR, what does gesture recognition, what do these sensors and actuators in the real world, and our ability to blur that line, what can we do with all of that?
One of the things I find fascinating is, yes, Moore's law has given us all kinds of additional compute. Every time we get the additional compute, our range of the things that we want to solve just gets bigger. I suspect we're on the cusp of that. I would expect some of these things really to take off when somebody figures out, "Mm, I can do that now." I can do it this way using this technology.
I think we're in that holding pattern, waiting to see where those experiments are going to come from. That brings us to the final lens. Over the years, we've had lenses associated with privacy and security. We had the rather provocative one for the last couple of editions talking about the reality of hostile tech. People really keyed off of that word "hostile."
If you think about it in the context of transacting and doing business on the open internet, it is pretty hostile out there. You've got all kinds of different agents who are looking to steal your data, or mount denial-of-service attacks, and all of that. We also had some things around sustainability. What we thought we'd do instead this time is roll it all up under a broader umbrella.
This is the umbrella of responsible technology. ºÚÁÏÃÅ has been very focused on this area for several years. Our initial focus really started in the privacy, security, and net neutrality kinds of discussions. It's blossomed from there to look at the premise that, when we build technology, we need to take into account not just the people whose problems we're trying to solve right now, but everyone who is around.
What communities are being affected? Importantly, how might people misuse our technology? I don't think any of the social media people set out thinking they were going to start genocides in faraway countries or create misinformation. Yet, if you stood back from it, it was not a stretch to conclude, "This is a way that a malicious actor could misuse my product." We need to start thinking about those things.
This whole banner of responsible tech is to say: it is we, technologists, who are responsible. We are accountable for both the intended and the unintended consequences of the technology that we create and deliver. We are, in fact, very bad at predicting, because, again, we're problem solvers. There's my problem right there; I'm going to focus on my problem.
We need to figure out how to lift our heads up. Think about: how might a person of color feel interacting with this product? What is the impact on the community where some of these materials are going to be sourced from? Again, how might a malicious actor use this? Who am I forgetting? Accessibility falls into this category, as an example.
We tried to wrap many of these different things under this banner of responsible technology to make the point that, given how involved all organizations are in the use and deployment of technology solutions to create business value, and also to create value for their customers, all of those organizations need to take into account what the unintended consequences might be.
What can we do to mitigate them to the extent possible, or even to imagine that they might occur, and see what we can do about it? It's very easy to go all dark and negative when you talk about this. I also think about the fact that sometimes, lifting your head up and looking around, it's like, "Oh, wait a minute, if I make this minor tweak, I've got this whole new market over there."
This doesn't just have to be, "Let's make sure additional bad stuff doesn't happen." This can also be, "Let's enable additional good stuff to happen." You never know if all you do is focus on who you're looking at, and the problem that you're trying to solve. By talking about this from the perspective of responsible tech, we can roll in concerns of, are we properly protecting people's data privacy?
Are we doing the right things to secure our systems, so that our data isn't stolen? Are we doing the right things around green software engineering? Are we taking into account the carbon consequences of the compute that we do? All of those things fall under the umbrella of responsible tech. This seems a nice way of talking about those things without having to home in individually on, "Okay, this is what you ought to be doing in green computing. This is what you need to be doing in carbon accounting. This is what you need to be doing in terms of security."
It's like, let's bring it up a level and say, "Think about it from the perspective of responsible tech." Are we being accountable to all of the stakeholders of our technology solutions, not just the ones that are immediately obvious?
Neal: Yes, there's a great book called What the Dormouse Said, which talks about how the sunny optimism of the hippies in the '60s infused a lot of the foundations of the internet. One of the great examples it gives is email. If they had added a caller ID to email, we would not have spam now. [laughter] Because of this sunny optimism, those guys could not imagine someone would take something as obviously cool and novel as email and use it for something that wasn't cool.
You're right, we do get into problem-solving mode and get tunnel vision: "Ooh, this solution is the thing I'm after." The thing I talk about a lot around this is the ethics of the decisions we make as technologists, because, especially with AI, these aren't just technical decisions anymore. They have ripple effects and affect people in positive and negative ways that we have to start paying more attention to. I think it's really valuable to call this out as something to think about on a regular basis.
Rebecca: Yes, the potential consequences of a decision gone wrong. I remember, back when you used to actually get advertisements in the physical mail, I went and picked up my mail one day. I was in my early 30s. I had a catalog for new mothers, and a packet inviting me to join the American Association of Retired Persons.
Okay, so a tree died [laughter] because of that mistake; it's not like it affected my life. But AI systems are being used in the criminal justice system, in the medical system, in the financial services system, and those impact people's lives. We can't have that same kind of "Oh, that doesn't really matter" when we're talking about that level of decision-making.
Neal: Simple mistakes can have massive rippling side effects in a way that you really don't want to have to deal with. We need some sort of Hippocratic Oath for technologists now, right? [laughs]
Rebecca: Yes. Well, ironically, there are several attempts under way to come up with a code of ethics for technologists; we haven't really settled on one yet. Different organizations, like the IEEE and the Association for Computing Machinery, and various others out there, are starting to take a shot at what this might look like.
Neal: Well, this is probably a good time, because originally doctors didn't have the Hippocratic Oath until they learned enough to really start injuring people. It's like maybe we should make sure that [laughs] this is only going to be used for good. Maybe our time has come in the software world as well.
Rebecca: Yes.
Neal: All right, fantastic. That's a great quick summary of the Looking Glass. All of this is consumable in a much more thorough, much more organized format, [chuckles] because it's a white paper. It's available on the ºÚÁÏÃÅ website, and it will be linked in the show notes for our podcast. Thanks so much, Rebecca, for giving us the overview, and Ken, thanks to you as well.
Ken: Thank you, and just one little plug: we will be following up with content throughout the year about how the lenses interact with each other. Some of us think that's going to be the most interesting part; not that what's there isn't interesting now, but there's better stuff coming.
Neal: Fantastic. All right. Thanks so much.
Rebecca: Thanks, Neal. Thanks, Ken.