Brief summary
Open source contributors and maintainers play a vital role in the technology ecosystem. But what's it like to develop and maintain an open source tool — especially one that thousands of other developers use and depend on?
Ìý
In this episode of the Technology Podcast, Srinivasan Sekar and Sai Krishna join hosts Rebecca Parsons and Scott Shaw to discuss their work on AppiumTestDistribution, an open source tool that supports test automation framework Appium. AppiumTestDistribution won a LambdaTest Delta Award at the August 2023 Testμ Conference.
Ìý
Listen to Sekar and Krishna explain how the project emerged, how they approach maintaining and evolving the tool and what it takes to be a part of an award-winning open source project.
Episode transcript
Ìý
Rebecca Parsons: Hello, everyone. Welcome to the ºÚÁÏÃÅ Technology Podcast. My name is Rebecca Parsons. I'm one of your recurring co-hosts, and I'm joined by Scott.
Ìý
Scott Shaw: Hi, I'm Scott Shaw, coming to you from Melbourne, Australia. I am also one of the hosts of the Technology Podcast.
Ìý
Rebecca: I'm joined today by two colleagues. Srini.
Ìý
Srinivasan Sekar: Hi, everyone. I'm Srini, and I've been with ºÚÁÏÃÅ for a little over seven years. I'm an open-source enthusiast, a contributor and a maintainer of Appium. My journey with Appium is a little long, probably eight-plus years, and my journey with open source is a little longer still. It all started with a small change to a node module in Appium's ecosystem. I've been an Appium user, an Appium member, an Appium contributor and also the founder of Appium's open source conference. Yes, that's me.
Ìý
Rebecca: Excellent. Sai?
Ìý
Sai Krishna: Hey, my name is Sai, and I work as a Principal Consultant at ºÚÁÏÃÅ. It's been close to 10 years with ºÚÁÏÃÅ now, a happy journey. Like Srini said, I'm also an open-source enthusiast who has been contributing to open source for roughly the last eight or nine years, and I've been doing it actively because I believe I've learned a lot from the community. One way of giving back to the community is open source, and I've continued to do that.
Ìý
I'm also a co-maintainer of and contributor to Appium. I've made quite a lot of contributions to Selenium, Gauge by ºÚÁÏÃÅ, Taiko and many other open-source tools. I've also presented at quite a lot of conferences along with Srini: Appium Conference and another one which both of us put together, Found. That's about me.
Ìý
Rebecca: Excellent. Part of what we want to talk about today is an open-source project that you have both worked on and that won an award: the Delta Award in testing at the Testμ conference, hosted by LambdaTest. Congratulations! But why don't you tell me a little bit about AppiumTestDistribution: how it came about, how it relates to the Appium ecosystem and what problem you were trying to solve?
Ìý
Sai: Appium is to mobile automation testing what Selenium is for the web. Like many other projects, we jump in, we write a framework and we try to automate. At the end of it, when we want to plug it into the pipeline, the goal we all look for is faster feedback from the pipeline and faster time to market. That was the situation in the mobile space. Appium is one such tool for automating mobile applications on both iOS and Android, but there was no real straightforward way to speed up this process, like running tests in parallel or setting up an ecosystem of different mobile devices.
Ìý
The mobile world is also unusual because of device fragmentation: iPhones come in different screen resolutions, Android has a vast range of screen resolutions, so you definitely need to test your application on different devices and OS versions. There are other things we really want to test before an app goes to production, too. This was a real pain, and when Srini and I were brainstorming, we felt this was a space we needed to solve, because one thing the community always asked was, "How can we run tests in parallel, because we want quicker feedback?"
Ìý
When we were brainstorming this with another colleague called JD, we thought, "Okay, why don't we build a solution? Because there's nothing out there." Srini and I were contributors to Appium, so we knew the entire Appium ecosystem. We had ideas about how we could solve this problem, and that's when we started scribbling and said, "Okay, let's start this project." ATD [AppiumTestDistribution] was born that way, not just to solve our own problems but to solve the community's problems. It ended up solving a lot of problems on a lot of projects, inside and outside ºÚÁÏÃÅ. That's how ATD was born.
Ìý
Srini: Yes. Just to add to Sai: ATD has gone through a lot of evolution, along with Appium's own architecture. The initial problem was making things simple for anyone who wants to run their tests in parallel. Over the course of a journey of around eight years, we have solved quite a lot of other problems, added them, and created an ecosystem out of Appium Test Distribution as well.
Ìý
Scott: You started eight years ago on this?
Ìý
Srini: Yes.
Ìý
Scott: It must have changed a lot over that time. It was a different landscape, I suppose eight years ago.
Ìý
Sai: Yes, Scott, definitely. Eight years back, the focus was only on running your tests on Android and iOS. As technology changed, we started moving into AR/VR and into IoT. [audio breaks] Along the journey, even Appium changed, saying, "Hey, why can't we automate other applications? Why is the focus only on mobile? Why can't we automate an AR/VR application with a Unity driver? Why can't we automate an IoT device? Why can't we automate a Raspberry Pi?"
Ìý
Things evolved, and today Appium and ATD are tightly coupled. If you want to automate a Raspberry Pi today, you can automate it with Appium; you don't have to go hunting for another tool. If you want to automate an AR/VR application, Appium is the go-to. Say you want to automate smart televisions: applications came to smart TVs as well, and we can automate your smart television applications using Appium. Over this course, like you asked, Appium has spread its wings beyond mobile, across platforms and across applications as well.
Ìý
In the same way, ATD started off with only local execution, then cloud execution, then it moved into remote execution. When we were working on one project, we bumped into an issue where we had a few devices sitting in the India region and a few devices sitting in the UK, but we wanted to utilize those devices for execution as well. We said, why don't we bring this capability of remote execution into ATD?
Ìý
Irrespective of the region, the framework or library should be capable of running these tests across regions; it's not bound to the one system it typically runs on. It evolved, and it's still evolving, because there are so many asks from the community saying, "Hey, why can't we do this? Why don't you support this feature?" We're still supporting the community on those as well.
Ìý
Srini: Architecturally it has evolved along with Appium as well. Initially it was very difficult for someone to contribute to Appium because of the way the code had been written. Quite recently, Appium moved to a plugin-based ecosystem, so Appium Test Distribution also evolved to solve other problems in the community. One problem I remember is that setting up the whole environment required for an Appium test to run on a device is crucial and very time-consuming, so we introduced a plugin as part of the Appium Test Distribution ecosystem called appium-installer that makes it very intuitive for people to just select what they want.
Ìý
It could be the Android emulator or iOS simulator they want to configure, or the Android environment variables. They just select that from the utility and go ahead with it, rather than searching the internet to solve the problems. Now we have a utility, the appium-installer plugin, that solves the problem. As Appium Test Distribution contributors, we have introduced quite a lot of plugins that solve people's problems in the community.
Ìý
Rebecca: It's interesting you use the phrase architecture because that's kind of where I was going to go next. When you think about the work that you've done in the Appium Test Distribution ecosystem, are there other systems that inspired your approach? What would you say your fundamental architecture is for this ecosystem? What was your rationale for choosing what you currently have as your implementation model?
Ìý
Srini: Going back to the main problem we solved, which is parallel testing: initially there were no tools that solved this problem, at least on top of Appium, so we wanted to solve it first for Android, meaning running the tests on the different Android devices we had configured against a Mac or Windows host. We also wanted to use what the community already had as test runners, so we started with TestNG, which already supported creating multiple parallel instances of any test that runs. We wanted to utilize that and build on top of it.
Ìý
We also got support from the TestNG community and the TestNG maintainers when there were problems, or when we needed solutions to enhance it or use it in a better way. We started with that and then looked at how we could solve the same problem for iOS devices. Then the problem evolved, and we wanted to solve it for remote devices together, so we had two different Mac hosts with different sets of devices sitting somewhere. We enhanced our architecture to solve that problem as well, with the open-source tools already available in the market.
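Srini's description of building parallel runs on top of TestNG maps to two styles ATD has historically offered, often called distribute (spread the tests across the connected devices) and parallel (run the whole suite on every device). This is a minimal Python sketch of those semantics only, with stub runners instead of real Appium sessions; the test and device names are made up:

```python
from concurrent.futures import ThreadPoolExecutor

def run(test, device):
    # Placeholder: a real runner would open an Appium session on `device`
    # and execute `test` through it.
    return (test, device)

def distribute(tests, devices):
    """Each test runs exactly once, spread round-robin across devices."""
    with ThreadPoolExecutor(max_workers=len(devices)) as ex:
        futures = [ex.submit(run, t, devices[i % len(devices)])
                   for i, t in enumerate(tests)]
        return [f.result() for f in futures]

def parallel(tests, devices):
    """Every device runs the full test list (useful for fragmentation coverage)."""
    with ThreadPoolExecutor(max_workers=len(devices)) as ex:
        futures = [ex.submit(run, t, d) for d in devices for t in tests]
        return [f.result() for f in futures]

tests = ["login_test", "cart_test", "checkout_test"]
devices = ["emulator-5554", "emulator-5556"]

distributed = distribute(tests, devices)  # three results: each test runs once
everywhere = parallel(tests, devices)     # six results: each test on each device
```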
Ìý
We chose TestNG initially; later we saw that the community had been using Cucumber as well for running tests in parallel, so we supported that test runner too. It all started in Java: for a long time, Appium Test Distribution was an enhancement of Appium's Java client that plugged in the parallel portion. We wanted to make it available for non-Java people as well, because the Appium ecosystem has client bindings not just in Java but also in .NET, Ruby and more, so we wanted to make sure people across the community could use this framework.
Ìý
Since Appium evolved its architecture to a plugin-based ecosystem, we moved all of our code to the server side, not just as an enhancement to the Appium client. All the heavy lifting (which devices are connected to my host, where exactly each one is, whether it's in the cloud or on a remote Mac or Windows host) is now done on the server, through a plugin in Appium's ecosystem. The blocker of only Java client users being able to use it has gone away; anybody in the community can use it, since Appium has client bindings in several other languages as well.
Ìý
Sai: Rebecca, like Srini said, we had no inspiration earlier, because there was no solution, and this was eight years back. But I can say that right now a lot of people are inspired by this project; they have picked up the same idea and built commercial models on top of it. A lot of cloud vendors have also reached out to us to say, "Hey, you know what? Can you support our cloud execution?"
Ìý
There are a lot of cloud vendors, like BrowserStack, Sauce Labs, LambdaTest and HeadSpin, and we support these cloud vendors in ATD as well, because users are on these clouds and want to utilize ATD to orchestrate the entire execution model there. It's the other way around today, but eight or nine years back, that was not the case.
Ìý
Scott: Yes, I was going to say, we've kind of seen the emergence of these cloud-hosted fleets of mobile devices for testing. What's the advantage of using Appium Test Distribution there as opposed to just their native tools?
Ìý
Sai: Scott, every cloud we have today in the wild runs on a license basis, so you buy the license. They say, "Okay, I'm giving you a cloud of 30 devices," and that's it. They also give you support: if something is not working, or your device has shut down, they'll probably restart it.
Ìý
Now, the way ATD ties in here is that at the end of the day, even though we have these clouds with any number of devices, when you run your tests in your pipeline and connect to these cloud services, you still have to orchestrate your tests and maintain your test sessions, because in the world of the cloud, the cloud only responds to the request that enters it.
Ìý
You say, "Give me this device and start running the test," and it will simply do that for us. But the entire, huge orchestration mechanism is what ATD takes care of, which the cloud won't take care of, and it doesn't make sense for the cloud to do that, because its entire model is very different. As cloud users, it's very important for us to do this because of cost: some licensing depends on how long you're using a device, and they charge you per minute or per hour.
Ìý
With these kinds of models, we need to be very conscious. I'll give you an example: you might have 30 devices in the cloud, and if you simply run your tests in parallel with no mechanism, it will choke all 30 devices for this mobile automation. You really don't need 30 devices; probably you need only a select five or eight devices. That kind of mechanism, this entire orchestration, is what ATD takes care of for us, which by default the clouds really don't care about.
Ìý
Srini: The costing model of several cloud vendors also depends on how many parallel sessions you can create. You are limited in the number of parallel sessions you can create, whether manual or automated tests are running on those devices. But with ATD, if you have devices hosted somewhere, there are no restrictions on how many parallel sessions you can create.
Ìý
It all depends on the resources we have. In the cloud, it's a completely different ballgame, because costing comes into the picture with respect to the number of parallel sessions one can create.
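The orchestration Sai and Srini describe, occupying only a capped slice of a device fleet no matter how many tests are queued, can be sketched as a bounded device pool. All names here are illustrative, not ATD's actual API; the real implementation lives in its server-side plugin:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class DevicePool:
    """Hands out at most len(devices) devices at once; callers block until one frees up."""
    def __init__(self, devices):
        self._lock = threading.Lock()
        self._available = list(devices)
        self._sem = threading.Semaphore(len(devices))
        self._in_use = 0
        self.peak_in_use = 0  # track peak concurrency to show the cap holds

    def acquire(self):
        self._sem.acquire()  # blocks when all devices are busy
        with self._lock:
            self._in_use += 1
            self.peak_in_use = max(self.peak_in_use, self._in_use)
            return self._available.pop()

    def release(self, device):
        with self._lock:
            self._available.append(device)
            self._in_use -= 1
        self._sem.release()

def run_test(pool, test_name):
    device = pool.acquire()
    try:
        # Placeholder for creating a real Appium session against `device`.
        return f"{test_name} ran on {device}"
    finally:
        pool.release(device)

# 30 tests queued, but the run never occupies more than 5 cloud devices.
pool = DevicePool([f"device-{i}" for i in range(5)])
with ThreadPoolExecutor(max_workers=30) as executor:
    results = list(executor.map(lambda n: run_test(pool, f"test-{n}"), range(30)))
```

The semaphore is what keeps the per-minute billing in check: extra workers simply wait instead of claiming more devices.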
Ìý
Scott: Have the vendors been supportive of your efforts? Do they take an active interest in what you're doing?
Ìý
Srini: Yes. Over the years, Sauce Labs and LambdaTest have been really supportive of us. Quite recently, LambdaTest has also supported us on GitHub through sponsorship. They are really supportive; they haven't treated us as competitors, to be honest.
Ìý
Rebecca: I was intrigued, Sai, when you were talking about the fact that you've expanded this from just mobile testing to IoT devices, Unity-based devices and Raspberry Pis. How difficult was that to incorporate? Was your architecture such that connecting to a completely different kind of device was a relatively simple exercise, and it was more just working out the communication with the device? How did you go about approaching that?
Ìý
Sai: Initially, the focus was only Android and iOS, so the entire architecture was only about Android and iOS [audio breaks]. If you look at the code base, it's all about Android and iOS and nothing more. Then the first change came in: "Okay, let's start supporting IoT, a Raspberry Pi." We were experimenting with this in the ºÚÁÏÃÅ Koramangala office, where we got a Raspberry Pi and connected it to the machine to check that the device could change its orientation and so on. But when we wanted to plug these changes into ATD, it was not possible.
Ìý
That's where we caught up with a few of our tech leads within ºÚÁÏÃÅ to say, "Hey, we have built this solution, but when we want to expand it, it's a problem. Can you help us re-architect this and make it plug and play? Later in the game, when something new comes along, how do we plug it in?"
Ìý
Just as we would on a project, we started brainstorming with the tech leads in the ºÚÁÏÃÅ office, and some of the ideas and recommendations from people were like, "Okay, why don't you guys read the Head First Design book? Why don't you read Clean Code? It will all help you write better code and start thinking more about how you can re-architect better in the future."
Ìý
After discussing with a lot of our colleagues, when we did the first re-architecture, we got ideas like, "Okay, if there's a change tomorrow, it should only be plug and play. It shouldn't mean moving things around across your entire ATD architecture." We learned slowly, talking to our colleagues and working with them, in fact pairing over weekends, because people within ºÚÁÏÃÅ showed interest in working with us to evolve this.
Ìý
As of today, if you ask how quickly we can make changes to ATD or the plugin ecosystem, I would say it's very simple. Don't touch the core; create a plugin. Just like a Lego game, we keep plugging in Legos and we are good. That's where we are today.
Ìý
Rebecca: Where do you think you might take it next?
Ìý
Sai: The next thing we are actively working on is expanding this boundary beyond mobile automation. You have a ton of devices plugged in, and we are seeing how we can explore and bring in an easy way for people to interact with those devices too. Let's say you have five devices plugged into one machine. How can I sit at home--? Similar to the cloud model today, where you can set up your infra for test automation, we are seeing how we can explore that in the mobile space as well.
Ìý
There's also a very interesting idea Srini and I are brainstorming now. We're talking all about automation, but we are not trying to solve a problem in manual testing, the exploratory world, when it comes to mobile. What I mean by this is: when we talk about test automation, it's all about parallelism and trying to save time, but why aren't we starting to think the same way about manual testing in an exploratory fashion, especially in the mobile world? To give an example, let's say I have about five devices I need to run my tests on for a sanity check.
Ìý
Automation can functionally test everything, but we also want to make sure the apps on these five devices work fine from a non-functional aspect: how they scroll, how the gestures work, whether it's smooth or laggy. Even though we have these kinds of CAs, we definitely do some level of sanity check, probably a very quick one-minute or five-minute sanity check based on the [inaudible] or on a handheld device, to see how the app performs.
Ìý
What Srini and I are exploring right now is, "Hey, we need to build a kind of solution where you perform your action manually on one device, holding it in your hand, and that same action is replicated on the other devices at the same time." That way, even my exploratory time across devices gets cut short. That's something we are trying to solve next, and we have not seen an open-source solution for it to date. We are trying to solve that problem now.
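The idea Sai outlines, one manual gesture fanned out to several devices, is essentially a broadcast of input events. A toy sketch under that assumption (the Device and Mirror classes are hypothetical; a real version would translate coordinates across screen sizes and drive real device sessions):

```python
class Device:
    """Stand-in for a real device connection (e.g. an Appium session)."""
    def __init__(self, name):
        self.name = name
        self.log = []  # record of actions applied to this device

    def tap(self, x, y):
        self.log.append(("tap", x, y))

class Mirror:
    """Replays every action performed on the lead device onto the followers."""
    def __init__(self, lead, followers):
        self.lead = lead
        self.followers = followers

    def tap(self, x, y):
        for device in [self.lead, *self.followers]:
            device.tap(x, y)

lead = Device("pixel-7")
followers = [Device("galaxy-s23"), Device("iphone-15")]
session = Mirror(lead, followers)
session.tap(100, 240)  # one manual gesture, mirrored to every device
```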
Ìý
Scott: I'm just trying to understand, is the driver then the manual device rather than code? Somebody has a device and they're taking actions and then those actions get replicated.
Ìý
Sai: That's one. Going slightly beyond that, we have done this in the web world, Scott: something called replay testing. A lot of us do that, and we have done it as well. We are also trying to see whether, from the events users perform on their mobiles, like keying in usernames and passwords, scrolling, or adding things to the cart, we can capture those events and export them as a test suite for you, so that your test suite gets built from the journey itself. We are exploring these options to see how we can make this entire mobile testing, both automated and exploratory, easier and quicker.
Ìý
Srini: Another interesting item in our backlog is coming up with plugins for other problem statements: what API calls is your app making, and what were the requests and responses? How do you control those requests using a proxy? We can do this using mitmproxy, but how do you do it in an automated fashion: plug in mitmproxy, change your responses and see how your app behaves on the fly? How about creating a plugin for that?
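As a sketch of the proxy idea: mitmproxy addons expose a response(flow) hook that fires for every intercepted response, so response tampering can live in an addon. Keeping the mutation as a pure function makes it testable without a proxy running; the endpoint and payload below are invented for illustration:

```python
import json

def mutate_cart_response(url, body):
    """If this is the (hypothetical) /cart endpoint, force an error payload
    to see how the app behaves when the backend misbehaves."""
    if "/cart" in url:
        payload = json.loads(body)
        payload["items"] = []
        payload["error"] = "simulated outage"
        return json.dumps(payload)
    return body

# In a real mitmproxy addon file, the hook would wire this in:
# from mitmproxy import http
# def response(flow: http.HTTPFlow) -> None:
#     flow.response.text = mutate_cart_response(
#         flow.request.pretty_url, flow.response.text)

mutated = mutate_cart_response("https://api.example.com/cart",
                               json.dumps({"items": [{"id": 1}]}))
untouched = mutate_cart_response("https://api.example.com/profile", "{}")
mutated_data = json.loads(mutated)
```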
Ìý
Another problem in the Appium world is that one server handles all the parallel requests. That means it creates many parallel sessions, so identifying a problem by going through the logs is very cumbersome. How could we use any models to solve the problem of working out which session handled a specific request when you have so many devices running in parallel on a single server? That's another problem statement we are looking into.
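The debugging problem Srini raises, one server's log interleaving many parallel sessions, reduces to grouping log lines by session id. A minimal sketch with an invented log format (real Appium logs tag lines differently):

```python
import re
from collections import defaultdict

# Hypothetical interleaved server log: each line carries a session id.
LOG = """\
[session a1b2] POST /element
[session c3d4] POST /element
[session a1b2] element not found
[session c3d4] click ok
"""

def group_by_session(log_text):
    """Split an interleaved log into one ordered stream per session."""
    sessions = defaultdict(list)
    for line in log_text.splitlines():
        match = re.match(r"\[session (\w+)\] (.*)", line)
        if match:
            sessions[match.group(1)].append(match.group(2))
    return dict(sessions)

grouped = group_by_session(LOG)
```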
Ìý
Rebecca: Yes, that's something people don't often think about. They're so focused on how to run all of these tests and get the green light as quickly as possible, but when the red light comes, how easy or difficult is it to pinpoint, "Okay, why is this test not green?" We like to focus on that happy path where everything always turns green [laughs] and don't always think about the challenge of pinpointing why this particular one went red.
Ìý
Srini: Agree. Agree.
Ìý
Scott: I'm curious how much support you've gotten from the community. Are you two the only committers or are there other people, you're getting a lot of pull requests?
Ìý
Sai: We have quite a few contributors, Scott. Right now, I can see we have 39 contributors on the repository. Some contributors are active even today, raising pull requests, and we see new contributors coming to the repository asking for changes. We encourage them: when you know what the problem is, why don't you create the pull request yourself?
Ìý
We are also encouraging people to come and get into this open-source world, and we spend some time with them if they don't know how to approach it. We do this both outside ºÚÁÏÃÅ and within ºÚÁÏÃÅ. Even two days back, someone was on the bench, and they said, "Hey, I'm on the bench, on the Beach, so I want to contribute. Can you help me? I see this issue, but I don't know where to start." We also help them with some of our techniques.
Ìý
Srini: We have also done some hackathons in the past. Those helped us gather more ideas and more pull requests from people, and they sparked interest in becoming contributors. Several participants were first-time contributors, and that helped start their journey in open source as well.
Ìý
Rebecca: Well, it sounds like the re-architecting work you did to support other devices also resulted in a code base that makes it easier to widen the base of contributors. That's good as well.
Ìý
Srini: Yes.
Ìý
Rebecca: Well, I want to congratulate you both again on winning the award. I look forward to continued innovations. ºÚÁÏÃÅ has a long history of involvement in test automation and in trying to broaden its applicability. I particularly liked your exploration of how we can replicate exploratory testing. I thought that was quite an interesting idea, and one that is quite important when you have the proliferation of devices we have in the mobile world.
Ìý
It's not nearly as critical in, say, the web world. Thank you both, Sai and Srini, for joining us today. Thank you, Scott, for the color commentary, as always. And thank you all for listening to the ºÚÁÏÃÅ Technology Podcast.
Ìý
Sai: Thank you, everyone. Thank you, Rebecca. Thank you, Scott, for your time.
Ìý
Scott: Thanks.
Ìý
Sai: Thank you.
Ìý
Srini: Thank you.
Ìý
[Music]