We believe in E2E test automation
The end-to-end test automation company in Vienna, Austria
Since its founding in 2012, QiTASC has been developing an easy-to-understand end-to-end automation tool that we both sell and use ourselves. With a second location in Düsseldorf, Germany, and international partners, we are constantly growing and taking the development of our products and services further. The founding team and family – Can, Denise and Michael – is still part of the company management. Behind the scenes, we cultivate a family atmosphere and appreciate the loyalty of our experienced team. It is this team that forms the basis of the constant improvement and success not just of our test automation software intaQt® but of QiTASC itself.
Left: Can Davutoglu (father, heads marketing/sales), centre: Denise Zehender (daughter, heads testing), right: Michael Zehender (son, heads development).
Our vision
To be the all-in-one test automation company
Founded by software developers, strategists and visionaries, our company follows a clear goal: to provide digital companies with a tool for identifying potential bugs right at the start of a project. We want to offer the customer an all-in-one tool that combines software, hardware and support in order to achieve faster results and higher-than-average quality in test automation.
Our mission
A testing process without manual interaction
We give testers a tool that simplifies the process of automated end-to-end testing and, at the same time, delivers management reports that are easy to read and work with. By dispensing with the need for manual testing, we make imprecise testing, overwhelming numbers of test cases and unreadable results a thing of the past.
We want to live in a world full of high-quality products.
Our strategy
We automate every step of the product development life cycle.
Our strategy reduces the time to market by up to 40%. Besides automating every step – from network configuration and operation to verification and reporting – we change the sequence of project development. When you shift testing, usually the final phase, from the end of a project to the front, requirements can be identified long before the final testing phase is reached. That way, potential errors can be detected before they occur and decisions can be made accordingly. The results: shorter development times and higher quality.
The life-journey of our software tools
The QiTASC testing team uses the intaQt® software solution themselves. That way, our development team knows first-hand which features are needed in order to optimise even our own software package.
History of intaQt®
Can tells the story of our test automation framework. Find out how our software tools have developed over time into the framework we work with today.
Show notes
History of intaQt: The history of the QiTASC test automation framework
Hello, my name is Can Davutoglu, I am the CEO of QiTASC, and I would like to give you some information about the history of our framework development.
We started our test automation framework by developing intaQt. intaQt is an automation tool, and it is designed to grow: it can be extended with new interfaces and new features. We designed intaQt to control a whole range of devices. We control mobile devices, we control VoIP phones such as Yealink, Snom and Polycom, we control various CPEs, web UIs and tablets. We have also developed hardware that is capable of controlling IoT devices and other devices that would normally need manual interaction. And this we do via intaQt.
intaQt is connected to an environment in which all these devices are hooked up. And everything that a human being can do with these devices, we can do by controlling them with intaQt.
We write step definitions and we have integrated Cucumber. With Cucumber and the Gherkin syntax you write the test cases, and step by step you can control these devices and perform actions. You can later enrich them with trace information, and you can collect evidence, screenshots and the like. Everything is copied into a directory where the test case evidence, the reports and the screenshots are stored.
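To make this concrete, here is a minimal sketch of how Gherkin steps map onto step definitions, using Python's behave library purely as a stand-in – intaQt's own step-definition language and device API look different, and the lab and device helpers below (acquire_device, call, is_established, screenshot) are hypothetical placeholders.

```python
# Illustration only: Gherkin-style steps backed by Python step definitions.
# The device helpers are hypothetical, not intaQt's actual API.
from behave import given, when, then

# The .feature file this would back could read:
#   Given party A has a registered mobile device
#   When party A calls party B
#   Then the call is established and a screenshot is stored as evidence

@given("party A has a registered mobile device")
def step_register_a(context):
    # hypothetical: take a mobile device from the connected lab environment
    context.device_a = context.lab.acquire_device("mobile")

@when("party A calls party B")
def step_call_b(context):
    # hypothetical: place a call from device A to device B's number
    context.call = context.device_a.call(context.device_b.number)

@then("the call is established and a screenshot is stored as evidence")
def step_verify_call(context):
    assert context.call.is_established()
    # evidence such as screenshots ends up in the test case's report directory
    context.device_a.screenshot(save_to=context.evidence_dir)
```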
sQedule is another development of ours. sQedule is an intelligent, or smart, scheduling environment for setups with multiple intaQt instances. This means you want to execute a lot of test cases in a very short time while your lab has limited resources, and different test cases require different resources. sQedule checks the availability of the resources and selects test cases according to your resource infrastructure. Whenever resources are free and there are test cases that can be run with those resources, sQedule intelligently reschedules the test cases in order to get through as many test cases as possible in as little time as possible.
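To illustrate the idea – this is not sQedule's actual algorithm, just a sketch of resource-aware selection – a scheduler only needs to know which resources are free and what each queued test case requires:

```python
# Sketch of resource-aware test selection: run the next test case whose
# required resources are all currently free, and reserve them while it runs.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    required: set          # lab resources this test case needs, e.g. {"phone-1", "shielded-box"}

@dataclass
class Scheduler:
    free: set              # resources currently available in the lab
    queue: list = field(default_factory=list)

    def next_runnable(self):
        """Return the first queued test case whose resources are all free, or None."""
        for tc in self.queue:
            if tc.required <= self.free:
                self.queue.remove(tc)
                self.free -= tc.required      # reserve the resources
                return tc
        return None                           # nothing runnable right now

    def release(self, tc):
        """Called when a test case finishes: its resources become free again."""
        self.free |= tc.required

sched = Scheduler(free={"phone-1", "phone-2", "shielded-box"})
sched.queue = [TestCase("voice-call", {"phone-1", "phone-2"}),
               TestCase("attenuation", {"phone-1", "shielded-box"})]
print(sched.next_runnable().name)             # -> voice-call
```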
intaQt studio is an IntelliJ-based user interface for writing the test cases that are executed in intaQt. So we can say it is a client-server architecture: intaQt is the server, intaQt studio is the client, and the test cases are written there. intaQt studio also has convenient features such as auto-completion, colour coding and the like. We have also integrated GitLab, so all the test cases you write are under version control and secured too.
So with intaQt studio you write your test cases, and you also have a graphical user interface where you can see the real devices, which are physically somewhere else. You can see them in your intaQt studio. This means we stream the user interface of the phone, for instance, into intaQt studio. You can see what is happening on these phones and you can interact with them. By clicking with the mouse and so on, you can control a device that is, for instance, somewhere else in the world.
After developing these parts, we are now capable of executing hundreds of thousands of test cases in a very short time. But now you need to do some reporting. You have your JIRA environment and/or CI/CD environment, or you have an application lifecycle monitor or something like this. And you have thousands of test cases that will be executed, with thousands of pieces of evidence, reports, logs, traces and so on. You need to bring all of it there. Again, you need some automation functionality, and this we do with conQlude.
conQlude is our reporting environment, into which all the information collected by intaQt is copied in a smart way. From there, using the APIs that are available in JIRA, for instance, or in CI/CD environments, we push this information into that environment automatically. With one click you can push hundreds of thousands of test cases – including who executed them and the evidence that was collected – into that environment. This is also one important step in the automation activities.
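As a hedged sketch of what such an automated push can look like – the endpoint, payload fields and token below are placeholders for illustration, not conQlude's or JIRA's actual API – uploading one result via a REST call might be as simple as this:

```python
# Illustration only: push one test-case result (verdict, executor, evidence
# links) to a ticketing or CI/CD system over its REST API.
import requests

def push_result(base_url: str, token: str, result: dict) -> None:
    response = requests.post(
        f"{base_url}/test-results",                  # placeholder endpoint
        json={
            "name": result["name"],
            "verdict": result["verdict"],            # "passed" / "failed"
            "executed_by": result["executed_by"],
            "evidence": result["evidence_files"],    # paths or URLs to reports, traces, screenshots
        },
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()

# pushing hundreds of thousands of results is then just a loop over the collected reports
```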
Then there is information that has to be collected from other network functions, from other IT systems, from other directories and databases. To do this in real time, we have developed a small service called colleQtor. It does real-time processing and collects the information that is needed for certain verification actions. We have automated things, but now we have evidence, and this evidence also needs to be verified. And before verification, you need to collect it. That is why we developed colleQtor.
We come from the telecommunications business, and in telecommunications the most important part is CDR verification. Everything related to charging is the most important thing, because this is what brings in the money. Therefore we have also developed CDR-linQ, which is a standalone module. CDRs are sometimes written much later than the test case is executed. For instance, in some lab environments you get the CDRs once a day, at midnight. With CDR-linQ you copy them and, of course, do some analysis and reformatting of the data. CDR-linQ offers the data in a form that can be used for verification purposes.
This means the test cases are executed using these devices, the CDRs are written in the network, and once the test cases have finished we still have to wait before we can verify the CDRs. In order to free the resources and make these steps independent of each other, we have created meta steps. In our reporting environments, we mark test cases where we need CDRs that are not yet available, and those test cases are set to pending. Once the CDRs have been copied into our CDR-linQ – let's say a few hours later – a trigger indicates that the CDRs are available, and conQlude gives the trigger for the final CDR verification. Once this is done, the test case is set to successful or failed, depending on the CDR verification. This is also an important part of being able to verify things in a lab environment.
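A minimal sketch of this pending/trigger mechanism – illustrative names only, not the actual conQlude or CDR-linQ interfaces – could look like this:

```python
# Sketch: a test case stays "pending" until the CDRs arrive, then a trigger
# runs the CDR rules and sets the final verdict.
from enum import Enum

class Verdict(Enum):
    PENDING = "pending"      # functional steps finished, CDRs not yet available
    PASSED = "passed"
    FAILED = "failed"

class TestResult:
    def __init__(self, name, cdr_rules):
        self.name = name
        self.cdr_rules = cdr_rules          # checks to run once the CDRs exist
        self.verdict = Verdict.PENDING

    def on_cdrs_available(self, cdrs):
        """Trigger fired when the CDRs have been copied in, e.g. hours later."""
        ok = all(rule(cdrs) for rule in self.cdr_rules)
        self.verdict = Verdict.PASSED if ok else Verdict.FAILED
        return self.verdict

# example rule: exactly one CDR with the expected called-party number
result = TestResult("voice-call",
                    [lambda cdrs: sum(c["called"] == "+431234567" for c in cdrs) == 1])
print(result.on_cdrs_available([{"called": "+431234567", "duration": 42}]))   # Verdict.PASSED
```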
Now, as I mentioned, we do telecommunications and IoT. In the lab environment you sometimes have your own signalling environment. You have your own 2G, 3G, 4G, 5G and Wi-Fi signals, and you need to put your phones into a shielded box. Once the phones are in the shielded box, there are attenuators where you say, "OK, I need a 3G signal in the box!" In order to make it available to these phones, you need to control these attenuators. We have integrated this as well, so we are capable of controlling attenuators from MTS Systemtechnik, from JFW in the United States, and from Highthon, used mainly in Europe. We have integrated them, and we can automatically control the signal inside these shielded boxes. This is one point.
Another point is the integration of a SIM array. SIM arrays are boxes in which the SIMs are physically located; via the internet or an intranet, the SIM information is copied to an electronic device, and from there a flex cable goes into the phone. The phone thinks there is a SIM inside, but in reality this SIM is in the SIM array.
In order to provide this functionality to the intaQt service, we have created a service called reloQate. intaQt is integrated with reloQate, and with this we can seamlessly manipulate the SIMs that are required for the test cases. The tester writes his test case without needing any information about the setup, and if a SIM card is required that is in the SIM array and not yet provisioned for these phones, this is done automatically via reloQate. The SIM is copied to the device, the device is rebooted, and then it is available for the next test case.
This is also something very important: we really have zero touch. By zero touch we mean that we do not touch the hardware, regardless of what it is. If we need to press the power button of a CPE, for instance, we create 3D housings with a servo motor and can actually trigger this action. If we need to identify the lights of the CPE – whether they are blinking red, green or something else – we put RGB sensors on top of it and identify the status of the devices, and so on. It is really a very fancy environment, and we can deal with any kind of hardware.
Now, due to security requirements, we have been asked whether it is possible to have a user interface that is not based on IntelliJ and does not need to be installed on your notebook. We have therefore developed the intaQt web-ui, a web-service-based user interface in which nearly all the functionality provided in intaQt studio is also available – especially the parts we need for our testing.
Then, as a next step, we implemented three different verification features, which are also fully automated. The first one is trace compare. While we are running our test cases we collect the traces, for instance Wireshark traces or snoops from network functions, and we put them into the directory where we handle these traces. The traces are decoded, and then we have a rule engine where we can apply trace rules. We can verify the message headers, we can verify the logic of the trace and the order of the incoming messages, and we can combine packets and compare them with other packets from runs that have already been approved as correct. This is trace comparison.
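As one very small example of what such a trace rule can check – an illustration of the ordering idea only, not intaQt's actual rule engine – consider verifying that the messages of a decoded trace appear in the expected order:

```python
# Sketch: check that the expected message types occur in the trace in order
# (other messages may be interleaved between them).
def messages_in_order(trace, expected):
    remaining = iter(msg["type"] for msg in trace)
    return all(any(seen == want for seen in remaining) for want in expected)

decoded_trace = [
    {"type": "INVITE", "from": "alice"},
    {"type": "100 Trying"},
    {"type": "180 Ringing"},
    {"type": "200 OK"},
    {"type": "ACK"},
]
print(messages_in_order(decoded_trace, ["INVITE", "180 Ringing", "200 OK", "ACK"]))  # True
print(messages_in_order(decoded_trace, ["INVITE", "200 OK", "180 Ringing"]))         # False
```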
We do the same with CDR verification: the CDRs that have been collected are matched against rule sets, and we do the CDR verification. And for protocol verification we also have a logical and functional verification of the protocols used in that environment. There are 3GPP specifications that we use; from them we generate a rule set against which we verify the test cases. For instance, take a standard, let's say SIP-based, use case where the A-party is talking with the B-party. If we do the protocol verification for that, we generate approximately 70,000 rules and verify the behaviour against the 3GPP documentation.
Of course, when you run test cases there is also some provisioning required. For this we have subsQriber-db, an asset management and network function provisioning environment. Why asset management? In some test cases you also use one-time data such as a voucher or some access criteria required by the test case. This is stored in our asset management database. Once it is used it is invalidated, and another code is generated; the test case itself does not need to be touched. And if you want to create, let's say, groupings of subscribers or similar things, you can use subsQriber-db for that as well.
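To illustrate the one-time-data idea – just a sketch, not subsQriber-db itself – a small asset pool can hand out a code, invalidate it on use and generate a replacement, so the test case never has to change:

```python
# Sketch: single-use voucher codes that are invalidated when taken and
# automatically replaced, keeping the pool size constant.
import secrets

class VoucherPool:
    def __init__(self, size=5):
        self.codes = {secrets.token_hex(4) for _ in range(size)}

    def take(self):
        """Hand out an unused code; it is invalidated and a new one is generated."""
        code = self.codes.pop()                 # invalidated: removed from the pool
        self.codes.add(secrets.token_hex(4))    # generate a replacement code
        return code

pool = VoucherPool()
print(pool.take())    # each call yields a different, single-use code
```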
Finally, we have our CI/CD infrastructure. intaQt offers you a command-line interface through which you can trigger intaQt from another environment, such as Jenkins. We support CI/CD and also provide documentation and results that can be used by this CI/CD infrastructure.
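A hedged sketch of what such a trigger from a CI job can look like – the command name and argument below are placeholders, not the actual intaQt CLI syntax – is simply a child process whose exit code decides the build result:

```python
# Illustration only: a CI job shells out to a command-line test runner and
# fails the build if the runner returns a non-zero exit code.
import subprocess
import sys

def run_suite(runner_cmd, suite_path):
    completed = subprocess.run([runner_cmd, suite_path], capture_output=True, text=True)
    print(completed.stdout)                  # the runner's report output for the CI log
    return completed.returncode

if __name__ == "__main__":
    sys.exit(run_suite("intaqt-cli", "suites/regression"))   # placeholder command and path
```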
Sometimes, when you have a network and a lab, the lab is always changing. This means it sometimes does not work properly, but you still need to finish your project, and some device, some node, some IT system is not available. For these cases we have mimiQ, which is our simulator. With mimiQ you can simulate the missing node, and the network does not notice that the node is missing: it sends a request, mimiQ is configured in such a way that it answers these requests, and the network works as expected. mimiQ also has a second function: it can generate a great number of requests in a very short time, so you can also use it as a load and stress test infrastructure that generates hundreds of millions of requests in a very short time.
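As a toy illustration of the "missing node" idea – mimiQ's real configuration and protocol support are not shown here, and the request/response pairs are made up – a stub that answers requests with canned responses keeps the rest of the network working:

```python
# Sketch: a stub service that answers incoming requests with canned responses,
# standing in for a node that is not available in the lab.
import socketserver

CANNED_RESPONSES = {b"PING": b"PONG", b"STATUS?": b"OK"}   # hypothetical protocol

class StubHandler(socketserver.BaseRequestHandler):
    def handle(self):
        request = self.request.recv(1024).strip()
        # answer as the simulated node would, so the caller sees a healthy peer
        self.request.sendall(CANNED_RESPONSES.get(request, b"UNKNOWN"))

if __name__ == "__main__":
    with socketserver.TCPServer(("127.0.0.1", 9999), StubHandler) as server:
        server.serve_forever()      # answer every incoming request indefinitely
```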
This is the framework we have developed. From the very beginning, where you execute the test cases, to collecting the data, to analysing and verifying it, all in an automated way – and, where needed, adding some, let's say, simulator-related functionality. It gives you everything that is needed in a modern environment to run your test cases automatically.