Testing Blog

The Inquiry Method for Test Planning

Monday, June 06, 2016
Labels: Anthony Vallone

23 comments:

  1. Black Swan, June 6, 2016 at 9:46:00 PM PDT

    Thanks for the post. Very informative

  2. Suresh, June 6, 2016 at 10:26:00 PM PDT

    Very useful, thanks.

  3. Unknown, June 6, 2016 at 10:45:00 PM PDT

    Hi Anthony,

    Thank you for this comprehensive article around test planning and considerations for writing test plans.

    We should also consider the source document from which the test plans are constructed. Source documents are usually functional specification documents. I have seen test teams put more emphasis on test plan templates and organization while ignoring the content and organization of the source document itself.

    The source document is also referred to by the development teams, and it is important for any test team to understand it thoroughly before creating any test artifact (test plans, test approach, or strategy documents).

    On some projects and programs, testing may rely more on the source document and less on test plans. Test plans tend to grow heavier release after release, especially in agile projects, and some test cases quickly lose their significance and relevance.

    We should study the source document and, based on its organization, decide the need for and content of the test plans.

    Apologies for the long comment. Hope this is relevant to the topic above.

    Thank You.

    Deepak K

  4. Unknown, June 7, 2016 at 9:10:00 PM PDT

    Excellent article. Very informative. Thanks for the post.
    -Sethu

  5. Z.Borrelli, June 8, 2016 at 2:34:00 AM PDT

    Hi there, Anthony. I posted a few questions on Twitter yesterday as '@zacoid55' and was told to carry on the discussion here.

    My main question is about the phrase "Many projects can automate all testing." Is this every bit of testing you've come up with, i.e. 100% of the tests you've posited, or are you actually saying you've automated absolutely everything?

    Thank you in advance

    1. Anthony Vallone, June 8, 2016 at 8:54:00 AM PDT

      Hi,

      Most projects at Google have no manual testing and rely entirely on automated tests. This is particularly true for back-end/core/infrastructure projects. If I understand you correctly, you are asking whether we have automated the tests we feel are necessary, or whether we have literally automated every possible scenario and permutation of inputs/state. It is usually the former, because cost normally prohibits testing absolutely every possibility. However, on April 1st, we can manage this:
      http://googletesting.blogspot.com/2015/04/quantum-quality.html

      -Anthony

    2. Steve C:\>, June 14, 2016 at 12:58:00 AM PDT

      How about projects that involve UI and use-case/user flows? Are these 100% automated at Google too? How are end-to-end tests carried out?

    3. Anthony Vallone, June 14, 2016 at 8:58:00 AM PDT

      It varies from team to team. There are many teams with complicated UIs that rely entirely on automation. Some teams take a hybrid approach where most testing is automated but some complex scenarios are manual. When taking the hybrid approach, it helps to design the system such that most project code is easy to automate in isolation and is loosely coupled with the components that are hard to automate.

      We have internal systems, similar to continuous integration systems, that are dedicated to running end-to-end tests. These systems continuously build binaries, deploy the SUT, execute large tests against the SUT, monitor results, report on status changes, etc.
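
      To illustrate the loose-coupling idea above, here is a minimal sketch in Python (all class and function names are hypothetical, not Google's actual code): the project code depends only on a small interface, so automated tests exercise it in isolation with a fake, while the hard-to-automate component stays behind the boundary.

        # Illustrative sketch with hypothetical names: design for automatable testing.
        from typing import Protocol

        class PaymentGateway(Protocol):
            """Boundary interface around a hard-to-automate component."""
            def charge(self, user_id: str, cents: int) -> bool: ...

        class CheckoutService:
            """Project code stays loosely coupled: it sees only the interface."""
            def __init__(self, gateway: PaymentGateway) -> None:
                self._gateway = gateway

            def checkout(self, user_id: str, cents: int) -> str:
                if cents <= 0:
                    return "invalid"
                return "paid" if self._gateway.charge(user_id, cents) else "declined"

        class FakeGateway:
            """Test double standing in for the hard-to-automate piece."""
            def __init__(self, succeed: bool = True) -> None:
                self.succeed = succeed

            def charge(self, user_id: str, cents: int) -> bool:
                return self.succeed

        def test_checkout_paid():
            assert CheckoutService(FakeGateway(True)).checkout("u1", 500) == "paid"

        def test_checkout_declined():
            assert CheckoutService(FakeGateway(False)).checkout("u1", 500) == "declined"

      With this shape, only the real gateway integration needs the heavier end-to-end or manual treatment; everything in front of the boundary runs in fast, isolated automated tests.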

  6. VISHAL, June 13, 2016 at 6:49:00 AM PDT

    "Injury to people or animals", never thought !! But How ?

    1. Anthony Vallone, June 14, 2016 at 9:03:00 AM PDT

      Consider software used by vehicles (land, water, air, or space), medical devices, heavy machinery, climate control, chemical factories, utility stations, etc.

  7. Ard, June 15, 2016 at 11:07:00 PM PDT

    How do you deal with receiving interesting feedback from your automated tests when that feedback needs to be explored: do you write another automated test case, or is it then cheaper to do it exploitative?

    1. Anthony Vallone, June 16, 2016 at 8:28:00 AM PDT

      Sorry, I don't understand. Can you define/clarify "interesting feedback" and "do it exploitative"?

    2. Unknown, June 20, 2016 at 8:51:00 AM PDT

      I think what Ard is asking is: how do you handle the result you receive from an automated test case? Is it more cost-effective to then write another automated test from that result, or just do explorative testing on the result?

    3. Anthony Vallone, June 20, 2016 at 9:16:00 AM PDT

      The result is simply pass or fail (along with logging and other artifacts). If the test fails, we need to determine the root cause. In many cases, root cause can be determined from logging alone (see http://googletesting.blogspot.com/2013/06/optimal-logging.html). In other cases, we need to reproduce the issue and debug. Since this is a one-time effort, we may debug via ad hoc automation or manual experiments, whichever is easier. Once the root cause is determined, the test may be fixed, the SUT may be fixed, and/or new automated tests may be created to cover the scenario.
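
      As a self-contained sketch of that logging point (all names are hypothetical), here is a test whose per-attempt logging usually makes the root cause of a failure readable from the log alone, without reproducing the run:

        # Illustrative sketch with hypothetical names: log enough for root-cause analysis.
        import logging

        logging.basicConfig(level=logging.INFO)
        log = logging.getLogger("sync_test")

        class FlakyBackend:
            """Stand-in dependency that fails a set number of times, then succeeds."""
            def __init__(self, failures: int) -> None:
                self.failures = failures

            def write(self, record: str) -> bool:
                if self.failures > 0:
                    self.failures -= 1
                    return False
                return True

        def sync(backend: FlakyBackend, record: str, max_retries: int) -> int:
            """Returns the attempt count on success; raises when retries are exhausted."""
            for attempt in range(1, max_retries + 1):
                ok = backend.write(record)
                # Log every attempt and outcome so a failure's root cause is
                # visible in the test artifacts.
                log.info("sync attempt=%d record=%s ok=%s", attempt, record, ok)
                if ok:
                    return attempt
            raise RuntimeError(f"sync failed after {max_retries} attempts")

        def test_sync_retries_then_succeeds():
            assert sync(FlakyBackend(failures=2), "r42", max_retries=3) == 3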

  8. Unknown, July 5, 2016 at 5:28:00 AM PDT

    Great piece of content, Anthony. I am mostly curious about applying the above to a mobile project, where you deal with constant changes in the market, app feature variance on different devices, coverage challenges, etc. What is your take on test-planning best practices when dealing with a cross-platform mobile app?
    I have written some thoughts about it on my personal blog (mobiletestingblog.com) but am looking to get your experienced POV.

    Again, great blog.

    Regards
    Eran

    1. Anthony Vallone, July 9, 2016 at 8:45:00 AM PDT

      Hi Eran,

      You should identify the supported platforms in the plan and categorize the feature variance in some way. There are many good approaches, but a simple one is to set up a grid with platform rows and feature columns. A platform may be a combination of OS version and device model. Each cell can contain unique information, and perhaps status, about testing that particular platform/feature combination. If the feature list is very long, create multiple grids, where each grid is a general feature category.
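
      For illustration only, with invented platforms, features, and statuses, such a grid might look like this:

        Platform            | Login | Sync  | Offline mode
        --------------------+-------+-------+-------------
        Android 7 / Pixel   | pass  | pass  | fail
        Android 6 / Nexus 5 | pass  | pass  | untested
        iOS 9 / iPhone 6    | pass  | n/a   | pass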

    2. Anthony Vallone, July 13, 2016 at 8:22:00 AM PDT

      Also, thanks for asking that question. It made me realize that an important question was missing. The post has been updated to include "What platforms are supported?".

    3. Shane Watson, July 27, 2016 at 3:06:00 AM PDT

      Thank you, Anthony Vallone, for sharing this great piece. Test planning is always an important factor; it helps testing executives implement effective testing techniques to remove bugs and vulnerabilities in the software under test.

  9. Unknown, August 4, 2016 at 5:54:00 AM PDT

    Thank You Anthony. Very informative and helpful.

  10. Ronen Yurik, September 4, 2016 at 9:58:00 AM PDT

    Hi All,

    Quick question: where do you manage your test plans/cases? You can do it in a test management tool, but isn't that kind of a waste of time? My approach is: if I have test automation engineers and we are using Cucumber + Selenium for UI, why not write the scenarios in code (feature files)? Could you share how test plans/test cases are managed at Google? What do you find more efficient?

    Thanks
    Ronen

    1. Anthony Vallone, September 5, 2016 at 9:31:00 AM PDT

      For test plans, most teams use Google Docs. Most of our test cases are automated, so the code repository and test comments serve as case management. For manual cases, we use an internal test case management tool.

  11. Sherlin Jones, May 24, 2017 at 5:03:00 AM PDT

    Thanks for the update

  12. kassa, July 17, 2020 at 5:44:00 AM PDT

    Hi Anthony, is this something that is still being practiced at Google? Curious for something new!





