#30DaysOfTesting – ECommerce Task II & IV

Previous in this series: Task I & III

Task 2: Read and share an interesting blog about ecommerce testing

Task 4: Find and share a useful video on YouTube about ecommerce testing

One testing website I have liked and read since I started testing software is Software Testing Help. Vijay has done an amazing job collecting all kinds of testing ideas and helping so many rookie testers. 8 Important Segments Of Testing eCommerce Websites is a very good place to start if you are new to testing retail software.

For advanced testing or as Daniel says at the end – to put a smile on your face – do some penetration testing.

What Is Quality Assurance?

Note: This article was originally published by trendig in English and German.

Are you in charge of software projects at your company and facing problems with your software? Do you want to improve the quality and have no idea where to start? Then this article will help you to start implementing QA processes!


QA stands for Quality Assurance and is a very important subject in the complex world of software development. We observe our software and find that it has discrepancies with customer expectations. A discrepancy in software is called a bug. Removing the bug from the software does not automatically mean that it now works flawlessly. A requirement is the term for a customer's expectations of a system, and a bug can hide there as well. Sounds difficult, right? That is why we need dedicated QA engineers!

Assurance

Let’s start with the assurance part of QA. Spoiler: nobody in the industry can assure that your software is flawless. Every software application has at least one bug; the question is: do we know what it is and where it is, and do we want others (end users, competitors, hackers) to find it? Now that we accept that, we can move on. How can we ensure that we know about all the relevant bugs in our software? We test the software! We can also test all kinds of documents, designs and code. Testers have a comprehensive toolbox for collecting information about the system under test.

After some time of testing you will notice a pattern of some kind (testers are really good at discovering hidden patterns) in how bugs appear or reappear. When you discover that pattern, one thing you might want to do is reduce the number of known bugs or bug types in your product in the future. We can divide this process into two steps:

The first step is to analyze bugs in the software with the aim of finding their source. Why and how did a defect find its way into the software? Possible questions to ask: Is our system architecture too complex? Is the technical documentation bad (or missing)? Are the requirements vague or contradictory? Do our people have the necessary skills to design/implement/test the software?

The second step is to define a set of actions and procedures to avoid the reappearance of the same bugs in the future. A typical approach might be: define a set of guidelines and standards to prevent this type of failure, improve the quality of requirements by creating a checklist of what good requirements should cover, or use a specific tool to help us.

To summarize: with the help of analytical techniques we collect information about the software, and with the help of constructive software quality assurance methods we prevent the reappearance of bugs from a known source.

Quality Assurance requires processes and structure in order to analyze and improve software quality.


Now we need to talk about the quality part of QA. To better understand the topic, let’s use the analogy of a chair. If we need new chairs for our dining room, we go to the store, see 20–30 different types of chairs and feel lost. How do we find the right model? Which chair can we describe as having good quality? Different people mention different things. For me, it is important that the chairs fit with the rest of the interior decor, and that they are stable and easy to clean (because of my kids). For my son, it is important that he can swing on one without the fear of falling backwards. The best way to address these needs is to create a list of requirements. Do we need armrests? Should they be stackable? What kind of material/style/colour?

We do the same with software. We create a list of requirements and measure the degree of compliance. The biggest problem is that software development is a very complex process, and many customers lack an understanding of it. Customer requirements influence up to 50% of project success. That is why we talk about quality attributes. One way to classify quality attributes and metrics is the standard ISO/IEC 25010:2011.

Like testing, quality aspects can be applied to all kinds of sources: software, subsystems, documents, single requirements, designs and code. When starting a project, consider which quality aspects you will cover and what you will use as a measurement. Choose wisely, because what you measure is what you will improve.

To illustrate the idea of quality, I used the example of finding a set of chairs, but software is a very complex system, more like a house. Quite often in development teams we hear customers saying: “We do not have the final set of requirements, but you can start and implement what we have so far.” This is wrong for many reasons, but I will mention just two of them. Firstly, a requirement baseline is part of the contract between the customer and the service provider and should be synchronised with an SLA (service level agreement). Secondly, imagine building a house and, when your builders are halfway done, having the idea to put a swimming pool in the basement. Will it work? No. It is the same with software. If you cannot read the source code and see the software architecture yourself, that does not mean they do not exist. Your software has a structure, and developing it requires you to follow certain rules.

Conclusion

Software development is a complex process which involves specialists from different domains. Quality Assurance is a discipline which shadows software development, starting with the gathering of requirements and ending with testing rollback scenarios, with the aim of finding errors as soon as possible, analyzing them, and improving the process to reduce or eliminate those types of bugs.

Find and fix errors as soon as possible to keep costs as low as possible.

Why do we do all this? Because preventing and reducing errors as early as possible costs less than the rework involved in fixing a bug later. In this case we can talk about built-in quality. It is a smart business decision to invest in effective software quality assurance.

If you want to read more about QA processes, we suggest the blogs of Janet Gregory and Anne-Marie Charrett.

Gmail And Dots

This week I was on the phone with my insurer. It was Saturday, and I had to give my name, my address, my birthday and my bank account to identify myself as me. I asked the insurer to contact me via e-mail in the future, because on workdays I mostly cannot answer the phone: I work as a trainer with a full class of students.

  • she: please spell your email address
  • me: kristine dot <rest of my gmail address>
  • she: your phone number
  • me: we are on the phone right now, you know my phone number
  • she: sorry, I have to register this kind of information. this is for Saturday calls only.
  • me: ….. +49 <numbers>
  • she: do you have a dot in your email address? are there any capitals?
  • me: (I spelled it and you did not listen) it is a Gmail address, it does not matter. And Gmail addresses are not case sensitive.
  • she: no, you are wrong, it matters!

I almost forgot about this conversation, but today I was reminded of it by the Netflix scam story. A year ago I wrote how handy it is that Gmail finally decided to ignore the dot and sell it as a feature. I was thinking as a tester, not as a user. But James is right: Google never informed me that I have an infinite set of email addresses. If users do not know, and the services that collect my email address do not know either, then we are back to: it’s a bug, not a feature! again.

As a tester I will keep using Gmail’s dot-ignoring feature, but as a user I will pay more attention and make a mental note to myself about possible misuse.
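The gap behind both stories is easy to sketch: a service that keys accounts on the raw address string treats kristine@gmail.com and k.ristine@gmail.com as two different users, even though Gmail delivers both to the same inbox. Here is a minimal normalization sketch; the function name and the domain list are my own illustration, not any official Gmail API:

```python
def normalize_gmail(address: str) -> str:
    """Collapse a Gmail address to a canonical form.

    Gmail ignores dots in the local part and treats addresses
    case-insensitively, so many spellings reach one mailbox.
    Other providers may be stricter, so only Gmail domains
    are normalized here.
    """
    local, _, domain = address.partition("@")
    domain = domain.lower()
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "").lower()
    return f"{local}@{domain}"

# All of these spellings collapse to the same canonical address:
variants = [
    "kristine@gmail.com",
    "k.r.i.s.t.i.n.e@gmail.com",
    "Kristine@Gmail.com",
]
assert len({normalize_gmail(v) for v in variants}) == 1
```

A service that stored the canonical form alongside the raw string would at least notice when two "different" registrations point at the same Gmail inbox, which is exactly the check the Netflix scam relied on being absent.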

MORE AGILE TESTING > INVESTIGATIVE TESTING

This is a digitalised collection of testing resources created and published by Lisa Crispin and Janet Gregory in their book More Agile Testing: Learning Journeys for the Whole Team. For more details on their work, visit http://agiletester.ca.

Already digitalised and checked parts: Introduction, Learning For Better Testing, Planning, Testing Business Value. Coming soon: Test Automation, What Is Your Context?, Agile Testing in Practice.

Part V: Investigative Testing

Books

Articles, Blog Posts, Slide Decks, and Websites

More Agile Testing > Testing Business Value

This is a digitalised collection of testing resources created and published by Lisa Crispin and Janet Gregory in their book More Agile Testing: Learning Journeys for the Whole Team. For more details on their work, visit http://agiletester.ca.

Already digitalised and checked parts: Introduction, Learning For Better Testing, Planning. Coming soon: Investigative Testing, Test Automation, What Is Your Context?, Agile Testing in Practice.

Part IV: Testing Business Value

Books

Articles, Blog Posts, Slide Decks, and Websites

More Agile Testing > Planning

This is a digitalised collection of testing resources created and published by Lisa Crispin and Janet Gregory in their book More Agile Testing: Learning Journeys for the Whole Team. For more details on their work, visit http://agiletester.ca.

Already digitalised and checked parts: Introduction, Learning For Better Testing. Coming soon: Testing Business Value, Investigative Testing, Test Automation, What Is Your Context?, Agile Testing in Practice.

Part III: Planning—So You Don’t Forget the Big Picture

Books

Freeman, Steve, and Nat Pryce, Growing Object-Oriented Software, Guided by Tests, Addison-Wesley, 2009.
Galen, Robert, Software Endgames: Eliminating Defects, Controlling Change, and the Countdown to On-Time Delivery, Dorset House, 2005.
Gottesdiener, Ellen, and Mary Gorman, Discover to Deliver: Agile Product Planning and Analysis, 2012.
Hendrickson, Elisabeth, Explore It! Reduce Risk and Increase Confidence with Exploratory Testing, Pragmatic Bookshelf, 2013.
Hüttermann, Michael, Agile ALM: Lightweight Tools and Agile Strategies, Manning Publications, 2011.
Whittaker, James A., Jason Arbon, and Jeff Carollo, How Google Tests Software, Addison-Wesley, 2012.

Articles, Blog Posts, Slide Decks

More Agile Testing > Learning For Better Testing

This is a digitalised collection of testing resources created and published by Lisa Crispin and Janet Gregory in their book More Agile Testing: Learning Journeys for the Whole Team. For more details on their work, visit http://agiletester.ca.

Already digitalised and checked parts: Introduction. Coming soon: Planning—So You Don’t Forget the Big Picture, Testing Business Value, Investigative Testing, Test Automation, What Is Your Context?, Agile Testing in Practice.

Part II: Learning for Better Testing

Books

Blog Posts and Online Articles

Courses, Conferences, Online Communities, Podcasts

More Agile Testing > Introduction

Two weeks ago on Slack we talked about collections of good resources, and Lisa wrote that she and Janet had created a good one, but it was not available online. I volunteered to digitalise it and she agreed. Since then I have been checking links and reading articles. What can I say: it is an AMAZING collection! Thank you, Lisa, for kindly allowing me to publish the list online.

This is the bibliography list created and published by Lisa Crispin and Janet Gregory in More Agile Testing: Learning Journeys for the Whole Team. For more details on their work, visit http://agiletester.ca.

Part I: Introduction

Books

Websites, Blogs, Articles, Slide Decks

State Of Testing 2018

This is the time of year when PractiTest and Tea-time with Testers invite all testers to fill out a survey to find out who we are, where we come from and what kinds of subjects we are working on. This is the 5th year in a row, and I am already looking forward to the picture we will get this time.

A big thank you goes to the contributors in the review team: Jerry Weinberg, Derk-Jan de Grood, Maria Kedemo, Helena Jeret-Mäe, Alan Page, Brent Jensen, Eran Kinsbruner, Bas Dijkstra, Eric Proegler, Kristel Kruustük, Gerie Owen, and Nermin Caluk. If you have any suggestions on how to improve it, you are invited to join the review team.

The organisers estimate that the survey will take only 10 minutes of your time. For me, it took more. Why? In this year’s survey, there are a few questions where you have to think and/or recall what you learned last year. Here are a few questions to give you a feeling:

Have you attended any conferences or training sessions in the past 3 years?
Have you started using any new tools to support your testing (Exploratory testing or in general) in the last year?
Have you made any important changes in the way you are testing during the last year?

Those are a few good questions, right? Be part of it! You can find the survey here.