1. Define Manual Testing?

Software testing is a validation process that makes sure a system works as per the business requirements. It evaluates and qualifies a system on various aspects such as accuracy, completeness, usability, efficiency, and more.

We need software testing for the following reasons-

1. Testing provides an assurance to the stakeholders that the product works as intended.

2. Avoidable defects that leak to the end user/customer because of inadequate testing damage the development company's reputation.

3. Defects detected in an earlier phase of the SDLC cost less and consume fewer resources to correct.

4. Saves development time by detecting issues in an earlier phase of development.

5. The testing team adds another dimension to software development by bringing a different viewpoint to the product development process.

Static testing is the review of the work products and documentation created throughout the project. It covers the specifications, business requirements, documentation, processes, and functional requirements in the initial phase of testing, so that the testers involved can understand the requirements in detail before the testing life cycle starts, which helps in delivering a quality product.

Dynamic testing is testing performed by executing or running the application under test, either manually or using automation.

A test closure report is a document that details the tests conducted during the entire SDLC, the analysis of the bugs and errors found and corrected, the defect density, and so on. It formally indicates the completion of the testing process.

Well, the most straightforward answer would be that we can stop testing when no more defects are being found in the software. However, it is not possible to have perfect, bug-free software, so we determine the exit criteria for testing based on the deadlines, the budget, and the extent of testing performed. Usually, testers find most of the major and critical bugs during the first and second weeks of testing. After the third and fourth weeks, even minor and cosmetic defects are taken care of, and the application moves into the regression testing phase. Once regression is completed, we can be assured that 99% of test scenarios have been covered, and the software is ready to be rolled out.

The software testing life cycle refers to all the activities performed during testing of a software product. The phases include-

Requirement analysis and validation – In this phase, the requirements documents are analyzed and validated and the scope of testing is defined.

Test planning – In this phase, the test strategy and plan are defined, the test effort is estimated, and the automation strategy and tool selection are decided.

Test design and analysis – In this phase, test cases are designed, test data is prepared, and automation scripts are implemented.

Test environment setup – A test environment closely simulating the real-world environment is prepared.

Test execution – The test cases are executed, bugs are reported, and fixes are retested once resolved.

Test closure and reporting – A test closure report is prepared to have the final test results summary, learning, and test metrics.

Some of the most widely used Defect Management tools are – Jira, Bugzilla, Redmine, Mantis, Quality Center, etc.

Below are the examples for different combinations of priority and severity- 

Low priority-Low severity – A spelling mistake in a page not frequently navigated by users.

Low priority-High severity – Application crashing in some very corner case.

High priority-Low severity – Slight change in logo color or spelling mistake in the company name.

High priority-High severity – Issue with login functionality.

WebDriver is an interface in Selenium that is used to:

  •  Automate web-based applications
  •  Work with multiple programming languages such as Java, C#, Python, PHP, Perl, and Ruby

The Actions class is a facility provided by Selenium for handling keyboard and mouse events. In Selenium WebDriver, handling these events includes operations such as drag and drop and clicking on multiple elements while holding the control key, among others. These operations are performed using the advanced user interactions API.
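A minimal sketch of the Actions class in Java; the URL and element locators (source, target, item1, item2) are illustrative assumptions, not part of any real application:

import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.interactions.Actions;

public class ActionsDemo {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com/drag-and-drop");           // hypothetical URL

        WebElement source = driver.findElement(By.id("source"));   // hypothetical locator
        WebElement target = driver.findElement(By.id("target"));   // hypothetical locator

        Actions actions = new Actions(driver);
        // Drag and drop: click-and-hold the source, move to the target, release
        actions.dragAndDrop(source, target).perform();

        // CTRL + click to select multiple elements in one gesture
        actions.keyDown(Keys.CONTROL)
               .click(driver.findElement(By.id("item1")))          // hypothetical locator
               .click(driver.findElement(By.id("item2")))          // hypothetical locator
               .keyUp(Keys.CONTROL)
               .perform();

        driver.quit();
    }
}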

findElement: A command used to uniquely identify a web element within the web page. It returns the first matching web element if the locator matches multiple elements, and throws NoSuchElementException if no element is found.

findElements: A command used to identify a list of web elements within the web page. It returns a list of all matching web elements, or an empty list if no matching element is found.

findElement(By by) method finds and returns the first matching element within the current context by the given mechanism. 

findElements(By by) finds and returns all matching elements within the current context by the given mechanism.
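A short illustration of the two calls; the URL is a placeholder assumption:

import java.util.List;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class FindElementDemo {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com");                         // hypothetical URL

        // findElement: first matching element, or NoSuchElementException if none
        WebElement firstLink = driver.findElement(By.tagName("a"));
        System.out.println("First link text: " + firstLink.getText());

        // findElements: all matching elements, or an empty list if none
        List<WebElement> allLinks = driver.findElements(By.tagName("a"));
        System.out.println("Total links on the page: " + allLinks.size());

        driver.quit();
    }
}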

driver.switchTo().alert().getText(); — the Alert interface provides a method that returns the text of the alert box message, for example: Alert alert = driver.switchTo().alert(); alert.getText();

There are two different ways to handle waits:
1. Implicit Waits
2. Explicit Waits

Implicit Waits: WebDriver waits for an element if it is not immediately available, so it does not throw NoSuchElementException right away. This is known as an implicit wait and can be set using: driver.manage().timeouts().implicitlyWait(15, TimeUnit.SECONDS);

Note: If the element appears in the DOM within the wait time, WebDriver does not wait for the remaining time and moves on to the next step. For example, with a 20-second implicit wait, NoSuchElementException is thrown only after the full 20 seconds have elapsed without the element appearing; but if the element appears after 5 seconds, WebDriver does not wait for the remaining 15 seconds.

Disadvantages:
(i) If the element never appears, WebDriver still waits for the full configured time before failing.
(ii) Once set, the implicit wait applies for the life of the WebDriver object instance.

Explicit Waits

A. Thread.sleep(): This pauses the running program for a fixed amount of time. Ex.- Thread.sleep(3600); (pauses for 3600 milliseconds)

Disadvantages:

(i) It blindly waits for the given time. This is not a good approach because even if the element appears within the wait time, the program does not move further until the full sleep time has elapsed.
(ii) Some machines are slow, and an element may not show up on a slow machine within the given wait time.
(iii) It is sleep time for the script, which is not a good practice because it sleeps unconditionally.

Expected Condition

An explicit wait is code you define to wait for a certain condition to occur before proceeding further in the code. There are some convenient methods provided that help you to write code that will wait only as long as required. WebDriverWait in combination with ExpectedCondition is one way this can be accomplished.
Ex.- WebDriverWait wait = new WebDriverWait(driver, 15);
WebElement element = wait.until(ExpectedConditions.presenceOfElementLocated(By.id()));

This waits up to 15 seconds before throwing a TimeoutException, or returns the element if it is found within 0–15 seconds. WebDriverWait by default calls the ExpectedCondition every 500 milliseconds until it returns successfully. A successful return value is true for an ExpectedCondition of type Boolean, and a non-null value for all other ExpectedCondition types. There are some common conditions that are frequently encountered when automating web browsers; Java provides convenience methods for them, so you don't have to code an ExpectedCondition class yourself or create your own utility package.

– Element is Clickable – it is Displayed and Enabled.
Ex.- WebDriverWait wait = new WebDriverWait(driver, 15);
WebElement element = wait.until (ExpectedConditions.elementToBeClickable(By.id()));

3. Fluent Wait Command: Each FluentWait instance defines the maximum amount of time to wait for a condition, as well as the frequency with which to check the condition. Furthermore, the user may configure the wait to ignore specific types of exceptions whilst waiting, such as ‘NoSuchElementExceptions’ when searching for an element on the page.
Ex.- Wait<WebDriver> wait = new FluentWait<WebDriver>(driver)
.withTimeout(30, TimeUnit.SECONDS)
.pollingEvery(5, TimeUnit.SECONDS)
.ignoring(NoSuchElementException.class);
wait.until(ExpectedConditions.elementToBeClickable(By.id()));

Hard Assertion: A hard assert throws an AssertionError immediately when an assertion fails, and the test is marked as failed.

assertEquals, assertNotEquals, assertTrue, assertFalse, assertNull, assertNotNull

Soft Assertion: failed assertions are collected and reported in the TestNG report without aborting the test at the point of failure.

To use soft assertions, we have to instantiate the corresponding SoftAssert class in the script and call assertAll() at the end so that the collected failures are reported.
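A minimal TestNG sketch showing the difference; the checked values are made up for illustration:

import org.testng.Assert;
import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

public class AssertionDemo {

    @Test
    public void hardAssertExample() {
        // Throws AssertionError immediately on failure; the rest of the test is skipped
        Assert.assertEquals(2 + 2, 4, "sum check");
    }

    @Test
    public void softAssertExample() {
        SoftAssert softAssert = new SoftAssert();
        softAssert.assertTrue("selenium".contains("len"), "substring check");
        softAssert.assertNotNull("report", "null check");
        // Collected failures (if any) are reported only here; without assertAll()
        // the test would pass even if one of the soft assertions had failed
        softAssert.assertAll();
    }
}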

getWindowHandle() returns the window handle of currently focused window/tab.

getWindowHandles() returns the handles of all windows/tabs launched/opened by the same driver instance, including all parent and child windows.

The return type of getWindowHandle() is String, while the return type of getWindowHandles() is Set<String>. The return type is a Set because window handles are always unique. In Chrome and Firefox, each tab in a window has a unique window handle, so getWindowHandles() returns handles for all tabs of a window. For example, if four tabs are open in a window, getWindowHandles() returns four handles in Chrome and Firefox (behaviour in IE and Edge may differ). getWindowHandles() internally uses a LinkedHashSet, so the Set it returns contains the window handles in the order in which the windows were opened.
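A sketch of switching between window handles; the URLs are placeholders, and a second tab is opened via JavaScript purely for demonstration:

import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class WindowHandlesDemo {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com");                                    // hypothetical URL
        String parentHandle = driver.getWindowHandle();                       // current window/tab

        // Open a second tab just for demonstration
        ((JavascriptExecutor) driver).executeScript("window.open('https://example.org');");

        // getWindowHandles() returns the handles of all open windows/tabs
        for (String handle : driver.getWindowHandles()) {
            if (!handle.equals(parentHandle)) {
                driver.switchTo().window(handle);                             // switch to the child tab
                System.out.println("Child title: " + driver.getTitle());
            }
        }

        driver.switchTo().window(parentHandle);                               // switch back to the parent
        driver.quit();
    }
}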

WebDriverWait is applied to a specific element with a defined expected condition and timeout. This wait applies only to the specified element and throws an exception when the condition is not met within the timeout. It uses the default polling interval rather than a custom one.

WebDriverWait wait = new WebDriverWait(driver, 20);
wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//button[@value='Save Changes']")));

Fluent wait is another type of explicit wait in which you can define the polling interval and ignore specific exceptions so that script execution continues while the element is not yet found. Here we can configure the polling time explicitly.

Wait<WebDriver> wait = new FluentWait<WebDriver>(driver)
    .withTimeout(30, TimeUnit.SECONDS)
    .pollingEvery(5, TimeUnit.SECONDS)
    .ignoring(NoSuchElementException.class);

Cucumber is a Behavior Driven Development (BDD) tool. Cucumber is a tool that executes plain-text functional descriptions as automated tests. The language that Cucumber understands is called Gherkin.

In BDD, users (business analysts, product owners) first write scenarios or acceptance tests that describes the behavior of the system from the customer’s perspective, for review and sign-off by the product owners before developers write their codes.

A feature file in Cucumber consists of the keywords/sections required for executing the scenarios, which are:

  •  Feature
  •  Scenario
  •  Scenario Outline
  •  Given
  •  When
  •  Then

The “Given” keyword is used to specify a precondition for the scenario. The “When” keyword is used to specify an operation to be performed. The “Then” keyword is used to specify the expected result of the performed action. The “And” keyword is used to join one or more statements together into a single statement.

In a single execution, a Scenario is executed only once, for example:

Scenario: Eat 5 out of 12
Given there are 12 cucumbers
When I eat 5 cucumbers
Then I should have 7 cucumbers

Scenario: Eat 5 out of 20
Given there are 20 cucumbers
When I eat 5 cucumbers

Then I should have 15 cucumbers

A Scenario Outline, on the other hand, can be executed multiple times depending on the data provided in the Examples table.

Scenario Outline: Eating
Given there are <start> cucumbers
When I eat <eat> cucumbers
Then I should have <left> cucumbers
Examples:
| start | eat | left |
| 12 | 5 | 7 |
| 20 | 5 | 15 |
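The scenario outline above maps to step definitions such as the following Cucumber-JVM sketch (assuming the cucumber-java and JUnit dependencies are on the classpath); the step definitions are executed once per row of the Examples table:

import static org.junit.Assert.assertEquals;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

public class CucumberSteps {

    private int cucumbers;

    @Given("there are {int} cucumbers")
    public void thereAreCucumbers(int start) {
        cucumbers = start;                    // precondition from the Given step
    }

    @When("I eat {int} cucumbers")
    public void iEatCucumbers(int eat) {
        cucumbers -= eat;                     // action from the When step
    }

    @Then("I should have {int} cucumbers")
    public void iShouldHaveCucumbers(int left) {
        assertEquals(left, cucumbers);        // expected result from the Then step
    }
}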

Throw is a keyword which is used to throw an exception explicitly in the program inside a function or inside a block of code.

throw can raise only a single exception at a time, i.e. we cannot throw multiple exceptions with the throw keyword. Syntactically, the throw keyword is followed by an exception instance, and it is used within the method body.

Throws is a keyword used in the method signature used to declare an exception which might get thrown by the function while executing the code.

We can declare multiple exceptions with throws. Syntactically, the throws keyword is followed by exception class names, and it is used with the method signature.
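A small sketch showing both keywords together; the file name is an illustrative assumption:

import java.io.FileReader;
import java.io.IOException;

public class ThrowVsThrows {

    // 'throws' in the signature declares the checked exception this method may propagate
    static void readFirstChar(String path) throws IOException {
        try (FileReader reader = new FileReader(path)) {
            if (reader.read() == -1) {
                // 'throw' raises a single exception instance explicitly
                throw new IOException("File is empty: " + path);
            }
        }
    }

    public static void main(String[] args) {
        try {
            readFirstChar("data.txt");        // hypothetical file name
        } catch (IOException e) {
            System.out.println("Caught: " + e.getMessage());
        }
    }
}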

final is used to apply restrictions on classes, methods, and variables: a final class can't be inherited, a final method can't be overridden, and a final variable's value can't be changed. final is a keyword.

finally is used to place important cleanup code; it is executed whether or not an exception is handled. finally is a block.

finalize is used to perform clean-up processing just before an object is garbage collected. finalize is a method.
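A quick illustration of the finally block (final and finalize are not shown, since one is a compile-time restriction and the other is invoked by the garbage collector):

public class FinallyDemo {
    public static void main(String[] args) {
        try {
            int zero = 0;
            int result = 10 / zero;                       // throws ArithmeticException
            System.out.println(result);
        } catch (ArithmeticException e) {
            System.out.println("Caught: " + e.getMessage());
        } finally {
            // Executes whether or not an exception was thrown or handled
            System.out.println("finally block executed");
        }
    }
}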

There are two types of polymorphism in java:

1) Static Polymorphism also known as compile time polymorphism

2) Dynamic Polymorphism also known as runtime polymorphism

Compile-time polymorphism (or static polymorphism): polymorphism that is resolved at compile time is known as static polymorphism. Method overloading is an example of compile-time polymorphism.

Method overloading: This allows us to have more than one method with the same name, as long as the methods differ in the number, sequence, or data types of their parameters.

Runtime polymorphism (or dynamic polymorphism): It is also known as dynamic method dispatch. Dynamic polymorphism is a process in which a call to an overridden method is resolved at runtime, which is why it is called runtime polymorphism.
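A compact sketch (the class names are made up) showing overloading resolved at compile time and overriding dispatched at runtime:

class Shape {
    double area() {                                   // overridden by subclasses (runtime)
        return 0;
    }

    double area(double side) {                        // overload resolved at compile time
        return side * side;
    }
}

class Circle extends Shape {
    private final double radius;

    Circle(double radius) {
        this.radius = radius;
    }

    @Override
    double area() {
        return Math.PI * radius * radius;             // runtime polymorphism
    }
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        Shape shape = new Circle(2.0);
        System.out.println(shape.area());             // dispatched to Circle.area() at runtime
        System.out.println(shape.area(3.0));          // overloaded version chosen at compile time
    }
}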

Interface: An interface can have only abstract methods; since Java 8, it can also have default and static methods.
Variables declared in a Java interface are implicitly public, static, and final.
Members of a Java interface are public by default.
An interface is implemented using the keyword “implements”.
An interface can extend other Java interfaces only.

Abstract class: An abstract class can have abstract and non-abstract methods.
An abstract class may contain non-final variables.
A Java abstract class can have the usual flavours of class members such as private, protected, etc.
A Java abstract class is extended using the keyword “extends”; an abstract class can extend another Java class and implement multiple Java interfaces.
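A minimal sketch (all type names are illustrative) contrasting an interface with an abstract class:

interface Payable {
    double BONUS_RATE = 0.1;                  // implicitly public static final

    double pay();                             // implicitly public abstract

    default String currency() {               // default method (Java 8+)
        return "USD";
    }
}

abstract class Employee implements Payable {
    protected double baseSalary;              // non-final, non-public members are allowed

    Employee(double baseSalary) {
        this.baseSalary = baseSalary;
    }

    abstract double bonus();                  // abstract method

    @Override
    public double pay() {                     // concrete method
        return baseSalary + bonus();
    }
}

class Manager extends Employee {
    Manager(double baseSalary) {
        super(baseSalary);
    }

    @Override
    double bonus() {
        return baseSalary * BONUS_RATE;
    }
}

public class InterfaceVsAbstractDemo {
    public static void main(String[] args) {
        Payable m = new Manager(1000);
        System.out.println(m.pay() + " " + m.currency());   // prints 1100.0 USD
    }
}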

Encapsulation in Java is a mechanism of wrapping the data (variables) and code acting on the data (methods) together as a single unit. In encapsulation, the variables of a class will be hidden from other classes, and can be accessed only through the methods of their current class. Therefore, it is also known as data hiding.
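A small illustration of encapsulation; the Account class and its fields are made up:

public class Account {
    private double balance;                        // hidden from other classes

    public double getBalance() {                   // controlled read access
        return balance;
    }

    public void deposit(double amount) {           // controlled write access with validation
        if (amount <= 0) {
            throw new IllegalArgumentException("Deposit must be positive");
        }
        balance += amount;
    }
}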

  •  forEach() method in the Iterable interface
  •  Default and static methods in interfaces
  •  Functional interfaces and lambda expressions
  •  Java Stream API for bulk data operations on collections
  •  Java Time API
  •  Collection API improvements
  •  Concurrency API improvements
  •  Java IO improvements
  •  Miscellaneous core API improvements
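A short sketch (the sample list is made up) touching a few of the features above — lambdas, the Stream API, and forEach:

import java.util.Arrays;
import java.util.List;

public class Java8Demo {
    public static void main(String[] args) {
        List<String> browsers = Arrays.asList("chrome", "firefox", "edge", "chrome");

        // Stream API + lambda: filter, transform, and de-duplicate in one pipeline
        browsers.stream()
                .filter(b -> b.startsWith("c"))
                .map(String::toUpperCase)
                .distinct()
                .forEach(System.out::println);     // method reference with forEach
    }
}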

Steps to write the program:

1. Define a string.
2. Two loops will be used to find the duplicate characters. Outer loop will be used to select
a character and initialize variable count by 1.
3. Inner loop will compare the selected character with rest of the characters present in the
string.
4. If a match found, it increases the count by 1 and set the duplicates of selected character
by ‘0’ to mark them as visited.
5. After inner loop, if count of character is greater than 1, then it has duplicates in the string.


public class DuplicateCharacters {
    public static void main(String[] args) {
        String string1 = "Great responsibility";
        int count;

        // Converts the given string into a character array
        char[] string = string1.toCharArray();
        System.out.println("Duplicate characters in the given string: ");
        // Counts each character present in the string
        for (int i = 0; i < string.length; i++) {
            count = 1;
            for (int j = i + 1; j < string.length; j++) {
                if (string[i] == string[j] && string[i] != ' ') {
                    count++;
                    // Set string[j] to '0' to avoid printing the visited character again
                    string[j] = '0';
                }
            }
            // A character is considered a duplicate if its count is greater than 1
            if (count > 1 && string[i] != '0') {
                System.out.println(string[i]);
            }
        }
    }
}

Using the new keyword- The constructor gets called
Employee emp1 = new Employee(); 

Using newInstance() method of Class class- The constructor gets called
Employee emp2 = Employee.class.newInstance();

Using newInstance() method of Constructor class- The constructor gets called
Constructor<Employee> constructor = Employee.class.getConstructor();
Employee emp3 = constructor.newInstance();

Using clone() method- No constructor call
Employee emp4 = (Employee) emp3.clone();

Using deserialization- No constructor call
ObjectInputStream in = new ObjectInputStream(new FileInputStream("data.obj"));
Employee emp5 = (Employee) in.readObject();

1. Explain the testing life cycle and different phases involved.
  • The testing life cycle typically includes planning, test case development, environment setup, execution, defect reporting, and closure. Each phase ensures systematic testing to achieve quality goals.
  • Verification ensures the software meets specifications and requirements (Are we building the product right?), while validation ensures the software meets the user’s needs and expectations (Are we building the right product?).
  • Discuss various types such as functional testing (unit, integration, system, acceptance), non-functional testing (performance, usability, security), and specialized testing (regression, smoke, exploratory).
  • Prioritization is based on risk analysis, business impact, and criticality of features. Techniques like risk-based testing, impact analysis, and test coverage metrics help in making informed decisions.
  • Start with requirements analysis to identify testable scenarios. Use techniques like equivalence partitioning, boundary value analysis, and decision tables to design comprehensive and effective test cases.
  • Implement regression test suites that focus on critical functionalities and frequently changing areas. Use automation tools and integrate regression testing into continuous integration pipelines for efficiency.
  • Immediately report the bug with clear steps to reproduce. Collaborate with developers to understand the root cause and potential impact. Prioritize fixes based on severity and impact on release timelines.
  • Discuss a specific project, highlighting challenges such as complex requirements, tight deadlines, or resource constraints. Explain your problem-solving approach, collaboration with the team, and the outcome achieved.
  • Regular status updates, clear bug reports with reproducible steps and screenshots, attending daily stand-ups, and participating in requirement reviews are key to maintaining effective communication.
  • Automation accelerates repetitive tests, improves test coverage, and enhances regression testing efficiency. It’s crucial for continuous integration and delivery (CI/CD) pipelines to achieve faster releases with higher quality.
  • Attend conferences, webinars, and workshops. Engage in online forums and communities. Follow industry blogs and  publications. Continuous learning ensures awareness of new tools, techniques, and best practices.
  • Emphasize the importance of open communication and understanding different perspectives. Discuss how you facilitated discussions, considered stakeholders’ viewpoints, and reached a consensus that aligned with project goals

Synchronization ensures that WebDriver waits for elements to be available before
performing actions. Techniques include: 

o Implicit Wait: Set at the beginning to wait for a certain amount of time before
throwing an exception.

o Explicit Wait: Use WebDriverWait with ExpectedConditions to wait for specific
conditions (element to be clickable, visible, etc.).

o Fluent Wait: Customizable wait that polls the DOM for a certain duration with a
specified frequency.

  • Use explicit waits to handle dynamic elements that load asynchronously. Wait for elements to become visible or clickable before interacting with them. Use JavaScriptExecutor to handle AJAX calls and update the DOM.
  •  Discuss the architecture of the framework (e.g., Page Object Model, TestNG/JUnit integration, reporting mechanisms).
  •  Consider aspects like maintainability, scalability, and reusability of code.
  •  Include strategies for handling test data, environment configurations, and integration with CI/CD pipelines.
  •  Maintain a configuration file or properties file specifying browser types and versions.
  •  Use WebDriver's capabilities to launch different browsers (Chrome, Firefox, Safari, etc.).
  •  Implement browser-specific profiles or settings as needed (e.g., ChromeOptions, FirefoxOptions).
  • Provide a specific example where a test failed due to environment issues, browser compatibility, or application changes.
  •  Discuss your approach to troubleshooting, including analyzing logs, using browser developer tools, and collaborating with developers.
  •  Highlight how you identified the root cause and implemented a solution to prevent future failures.
  •  Use WebDriver's switchTo().alert() method to handle JavaScript alert, confirmation, or prompt dialogs.
  •  For basic authentication pop-ups, pass credentials in the URL (http://username:password@url) or use third-party browser extensions.
  •  Annotations like @Test, @BeforeTest, @AfterTest in TestNG (or @Test, @Before, @After in JUnit) are used to define test methods, setup, and teardown methods.
  •  @DataProvider and @Parameters annotations facilitate data-driven testing by providing test data.
  •  Set up a Selenium Grid hub and multiple nodes, each configured with different browsers and operating systems.
  •  Specify desired capabilities to target specific nodes for parallel execution using TestNG or JUnit parallel execution features.
  •  Use data-driven testing techniques with Excel, CSV files, databases, or property files.

  •  Implement data providers with TestNG (@DataProvider) or JUnit to feed test data into test methods dynamically (a minimal sketch follows after this list).

  •  Maintain separation between test data and test logic for better maintainability.
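A minimal TestNG data-provider sketch; the credentials and test logic are placeholders, not real application data:

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class LoginDataDrivenTest {

    @DataProvider(name = "credentials")
    public Object[][] credentials() {
        // Test data kept separate from test logic; in practice it could be read from CSV/Excel
        return new Object[][] {
            {"alice", "pass123"},
            {"bob",   "secret"}
        };
    }

    @Test(dataProvider = "credentials")
    public void loginTest(String username, String password) {
        // Placeholder for the actual login steps against the application under test
        System.out.println("Testing login with " + username + "/" + password);
    }
}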

  •  Use tools like Jenkins, Bamboo, or GitLab CI to automate test execution.
  •  Trigger Selenium tests automatically on code commits or scheduled builds.
  •  Generate test reports and artifacts for visibility and analysis within the CI/CD environment.
  •  Selenium is primarily used for web UI testing, but it can be extended for API testing using libraries like RestAssured.
  •  For mobile testing, consider Appium, which is based on Selenium and supports testing of native, hybrid, and mobile web applications.
  • Share knowledge through mentoring, conducting workshops, or writing technical documentation.
  • Implement best practices like code reviews, continuous integration, and automated testing standards.
  •  Collaborate with developers and QA engineers to establish guidelines for test automation frameworks and test case design

1. What is API testing?

API testing is a type of software testing that involves testing the application programming interfaces (APIs) directly. It focuses on testing the communication between different software components and ensuring that they function correctly, reliably, and securely.

 Common types of APIs include RESTful APIs, SOAP APIs, and GraphQL APIs. RESTful APIs are most commonly used for web services due to their simplicity and flexibility.

 I have used tools like Postman, SOAP UI, and JMeter for API testing. These tools provide a user-friendly interface for sending requests, validating responses, and automating tests

Key elements to consider include:

  • Input data
  • Endpoint URL
  • HTTP methods (GET, POST, PUT, DELETE)
  • Request headers
  • Request parameters
  • Expected response status codes
  • Expected response data
  • Error handling scenarios

Authentication and authorization are important aspects of API testing. I typically handle them by including authentication tokens or API keys in the request headers. Additionally, I verify that the API endpoints enforce proper authorization by testing different user roles and permissions.

1. Can you explain the difference between SOAP and RESTful APIs?

 SOAP (Simple Object Access Protocol) is a protocol for exchanging structured information in the implementation of web services. It relies heavily on XML for message format and typically uses HTTP or SMTP as the transport protocol. REST (Representational State Transfer), on the other hand, is an architectural style for designing networked applications. It uses standard HTTP methods like GET, POST, PUT, and DELETE for communication and typically returns data in JSON or XML format. RESTful APIs are simpler, more lightweight, and more flexible compared to SOAP APIs.

 API versioning is important for maintaining backward compatibility and ensuring that changes to the API do not break existing clients. I typically handle API versioning by including version numbers in the endpoint URLs or request headers. Additionally, I maintain separate test suites for different API versions to ensure that changes are thoroughly tested without affecting existing functionality.

 I use various strategies for API test automation, including:

  • Writing test scripts using libraries like RestAssured (for Java) or Requests (for Python) (a sketch follows after this list)
  • Using testing frameworks like Postman or SoapUI for creating automated test suites
  • Integrating API tests into continuous integration pipelines for automated regression testing
  • Implementing data-driven testing by parameterizing test data and using loops to iterate over multiple test cases
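A hedged RestAssured + TestNG sketch; the base URI, endpoint, token, and expected body are assumptions for illustration only:

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.testng.annotations.Test;

public class ApiSmokeTest {

    @Test
    public void getUserReturns200() {
        given()
            .baseUri("https://api.example.com")          // assumed base URI
            .header("Authorization", "Bearer <token>")   // assumed auth token
        .when()
            .get("/users/1")                             // assumed endpoint
        .then()
            .statusCode(200)
            .body("id", equalTo(1));                     // assumed response field
    }
}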

 Testing APIs in a microservices architecture requires a different approach compared to monolithic applications. I typically focus on testing individual microservices in isolation using contract testing techniques like consumer-driven contract testing. Additionally, I ensure that APIs are well-documented and adhere to standards like OpenAPI (formerly Swagger) to facilitate communication and collaboration between teams.

 One challenging scenario I encountered was testing an API that relied heavily on third-party dependencies, which were not always reliable or available for testing. To address this, I implemented stubs or mock servers to simulate the behaviour of external dependencies during testing. I also worked closely with the development team to identify and isolate potential points of failure, and prioritise testing efforts based on criticality and risk. Ultimately, by implementing a combination of techniques and collaborating effectively with stakeholders, we were able to successfully test the API under challenging conditions.

1. What is JMeter?

Apache JMeter is an open-source tool used for performance and load testing. It is designed to measure and analyse the performance of web applications and services.

 The key components of a JMeter test plan include Thread Group, Samplers, Logic Controllers, Listeners, Timers, Assertions, and Configuration Elements.

 A Thread Group in JMeter represents a group of users that simulate virtual users sending requests to the target server. It controls the number of threads (users) and the duration of the test.

Samplers in JMeter are responsible for generating requests to the target server. They simulate different types of requests such as HTTP requests, FTP requests, JDBC requests, etc.

 Listeners in JMeter are used to collect and display the test results. They provide visual representations of the test data in the form of tables, graphs, or trees.

1. How do you parameterize a JMeter test plan?

 Parameterization in JMeter allows for dynamic data input during test execution. This can be achieved using various elements like CSV Data Set Config, User Defined Variables, or functions like __Random().

 Logic Controllers in JMeter determine the order and repetition of samplers within a Thread Group. Examples include Simple Controller (for organising samplers), Loop Controller (for looping a set of samplers), and If Controller (for adding conditional logic).

Test results in JMeter can be analysed using Listeners such as View Results Tree, Summary Report, Aggregate Report, and Response Times Over Time. These listeners provide insights into response times, throughput, error rates, etc.

 Timers in JMeter are used to simulate realistic user behaviour by introducing delays between requests. They help in controlling the pacing of the test and avoid overwhelming the server with simultaneous requests.

 Distributed testing in JMeter involves running tests on multiple machines simultaneously to simulate a large number of users. This can be achieved by configuring the “Remote Testing” feature in JMeter and setting up the necessary master-slave architecture.

1. What is the TOSCA Tool?

TOSCA (Test Automation Suite by Tricentis) is a model-based test automation tool used for end-to-end functional testing, regression testing, and test management.

Key features of TOSCA include:

  • Model-based Test Automation
  • Scriptless Test Automation
  • Integrated Test Management
  • Continuous Testing
  • Risk-based Testing

TOSCA allows testers to create automated tests without writing code by using a model-based approach. Test cases are created using graphical representations of the application under test and its interactions.

  • TOSCA Commander: It is the main component used for creating and executing test cases. It provides a graphical interface for designing test cases.
  • TOSCA TBox: It is the repository for reusable test modules, test data, and test configurations. TOSCA TBox enables efficient test case management and reuse.

Model-based Testing in TOSCA involves creating a graphical representation (model) of the application under test and its functionalities. Test cases are derived directly from this model, enabling testers to focus on test design rather than script development.

1. How does TOSCA support Continuous Testing?

TOSCA integrates with Continuous Integration (CI) and Continuous Delivery (CD) pipelines, allowing automated tests to be executed automatically as part of the software delivery process. This ensures that quality is maintained throughout the development lifecycle.

 Risk-based Testing in TOSCA involves prioritising test cases based on the perceived risk associated with different features or functionalities of the application. This ensures that testing efforts are focused on areas of the application that are most critical to its success.

TOSCA provides various mechanisms for handling dynamic elements in test automation, such as dynamic IDs or content. Techniques include using regular expressions, parameterization, and dynamic waits to ensure robust and reliable test execution.

 Advantages of using TOSCA for test automation include:

  • Faster test creation and maintenance with scriptless automation
  • Improved test coverage through model-based testing
  • Enhanced collaboration and reusability with integrated test management
  • Support for Continuous Testing and DevOps practices

TOSCA provides various integration options, including APIs and plugins, to integrate with other testing tools, frameworks, and third-party applications. For example, TOSCA can integrate with Jenkins for Continuous Integration, JIRA for issue tracking, and Selenium for web automation.

1. What is Playwright?

 Playwright is an open-source Node.js library for automating web browsers, allowing developers to write tests for web applications.

 Playwright offers features like cross-browser testing, automated testing of desktop and mobile browsers, support for multiple programming languages, and robust debugging capabilities.

 Playwright provides a more modern and powerful API, better support for browser automation tasks, including handling multiple tabs and frames, and it supports multiple browsers out of the box.

Selectors are used in Playwright to identify elements on a web page, similar to XPath or CSS selectors. Playwright offers various selector strategies, including CSS selectors, XPath, and text content-based selectors.

 Advantages include support for multiple programming languages, cross-browser testing, improved reliability, and faster test execution due to its built-in parallel execution support.

1. How would you handle authentication pop-ups in Playwright?

Authentication pop-ups (HTTP basic authentication) can be handled in Playwright by supplying httpCredentials when creating a browser context, so the browser answers the authentication challenge automatically.

Playwright allows file uploads using the input[type=file] element. You can use the setInputFiles method to set the file path for file input elements.

Playwright supports parallel test execution out of the box. You can use test runners like Jest or Mocha along with Playwright to run tests in parallel by configuring the test runner accordingly.

Best practices include writing modular and maintainable code, using descriptive test names, minimising the use of sleep statements, leveraging selectors efficiently, and organising tests into logical groups.

Flaky tests can be addressed by investigating the root cause of the flakiness, adding appropriate waiting strategies, using retries with exponential backoff, and ensuring test environment stability.

Playwright tests can be integrated into a CI/CD pipeline by setting up a CI server like Jenkins or Travis CI, configuring the pipeline to install dependencies and run tests using appropriate commands, and generating test reports for analysis.

1. What is ETL Testing?

 ETL Testing is a process of validating, verifying, and ensuring that the data extracted from various sources, transformed according to business rules, and loaded into the target database is accurate and complete.

 The different types of ETL testing are:

  • Data completeness testing
  • Data transformation testing
  • Data quality testing
  • Performance testing
  • Regression testing

Some challenges in ETL testing include:

  • Handling large volumes of data
  • Dealing with complex transformations
  • Ensuring data integrity across different systems
  • Identifying and handling data anomalies

Data validation involves comparing the source data with the transformed data to ensure accuracy. This can be done using SQL queries or by using ETL testing tools to validate the data.

As a fresher, I have primarily used tools like Informatica Data Validator, Talend Data Quality, or IBM InfoSphere DataStage for ETL testing purposes.

1. Can you explain the difference between ETL testing and Database testing?

ETL testing focuses on validating data movement and transformation processes from source to target systems, ensuring data completeness, accuracy, and integrity. Database testing, on the other hand, primarily involves testing the database schema, stored procedures, triggers, and functions for data consistency and correctness within the database itself.

Incremental loads involve loading only the changed or new data since the last ETL run. In ETL testing, we verify that the incremental load process correctly identifies and loads only the delta changes into the target system, ensuring data consistency and minimising processing time.

Yes, performance issues such as slow data processing, long load times, or resource bottlenecks can occur in ETL testing. To address them, we can optimise SQL queries, fine-tune ETL processes, use partitioning or indexing techniques, and scale hardware resources as needed.

 Ensuring data quality involves validating data accuracy, completeness, consistency, and integrity throughout the ETL process. We achieve this by implementing data quality checks, data profiling, data cleansing techniques, and using validation rules and constraints.

 Regression testing in ETL projects involves retesting the entire ETL process or specific components after changes or enhancements to ensure that existing functionality remains unaffected. We typically create regression test suites covering various scenarios and use automated testing tools to streamline the regression testing process.

1. What is AWS?

 AWS (Amazon Web Services) is a cloud computing platform provided by Amazon.com that offers a wide range of services, including computing power, storage, databases, machine learning, and more.

 Core services of AWS include EC2 (Elastic Compute Cloud), S3 (Simple Storage Service), RDS (Relational Database Service), and IAM (Identity and Access Management).

 EC2 (Elastic Compute Cloud) is a web service that provides resizable compute capacity in the cloud. It allows users to launch and manage virtual servers, called instances, on-demand.

S3 (Simple Storage Service) is an object storage service offered by AWS for storing and retrieving any amount of data from anywhere on the web. It is highly scalable, durable, and secure.

IAM (Identity and Access Management) is used to securely control access to AWS services and resources. It allows users to create and manage users, groups, and permissions to access various AWS resources.

1. Explain the differences between EC2 and S3.

 EC2 provides resizable compute capacity, allowing users to launch and manage virtual servers, while S3 is an object storage service for storing and retrieving data. EC2 is used for running applications and services, while S3 is used for storing data objects such as files and backups.

 VPC is a virtual network dedicated to your AWS account. It allows you to launch AWS resources, such as EC2 instances and RDS databases, into a virtual network that you define. VPC provides advanced networking features such as subnets, route tables, and security groups.

 AWS provides several security mechanisms, including IAM for controlling access, VPC for network isolation, Security Groups for firewall rules, and encryption options for data protection. Additionally, enabling Multi-Factor Authentication (MFA) and regularly auditing access permissions are essential for enhancing security.

 RDS (Relational Database Service) is a managed relational database service that supports multiple database engines like MySQL, PostgreSQL, and SQL Server. DynamoDB, on the other hand, is a fully managed NoSQL database service provided by AWS. It is optimised for high-performance, low-latency applications and offers seamless scalability.

AWS Lambda is a serverless computing service that allows users to run code without provisioning or managing servers. Users can upload their code as Lambda functions, which are triggered by various AWS events such as API calls, file uploads, or database updates. Lambda automatically scales and manages the infrastructure required to run the code.

  

1. What is DevOps?

DevOps is a set of practices that combines software development (Dev) and IT operations (Ops). It aims to shorten the systems development life cycle and provide continuous delivery with high software quality.

The key components of DevOps include continuous integration, continuous delivery, automated testing, infrastructure as code (IaC), and monitoring and logging.

 Popular DevOps tools include Git, Jenkins, Docker, Kubernetes, Ansible, Terraform, and Prometheus.

 Version control is a system that records changes to a file or set of files over time so that you can recall specific versions later. It’s important in DevOps for collaboration, tracking changes, and ensuring code integrity.

 Continuous Integration is the practice of automatically building and testing code changes frequently, while Continuous Deployment is the process of automatically deploying code changes to production after passing tests in a CI pipeline.

1. How do you ensure the security of infrastructure as code (IaC) scripts?

 Security in IaC can be ensured by using secure coding practices, implementing role-based access control (RBAC), regularly scanning for vulnerabilities, and using tools like HashiCorp Vault for secret management.

 Blue-Green Deployment involves running two identical production environments where one is active (blue) and the other is idle (green). Canary Deployment is a pattern where new code changes are gradually rolled out to a small subset of users before being deployed to the entire infrastructure.

Containerization offers benefits such as isolation, scalability, consistency across environments, resource efficiency, and faster deployment times.

 Orchestration of containers in a clustered environment is typically handled by tools like Kubernetes, which automate deployment, scaling, and management of containerized applications.

Infrastructure as Code is the practice of managing infrastructure through machine-readable definition files. Its advantages include consistency, repeatability, scalability, and the ability to version control infrastructure configurations.

 I have experience setting up CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or CircleCI. These pipelines automate the building, testing, and deployment of code changes, ensuring rapid and reliable software delivery.

 Incident response involves setting up monitoring tools like Prometheus, Grafana, or ELK Stack to detect anomalies and trigger alerts. We also establish incident response procedures and conduct post-mortems to identify and mitigate root causes.

1. What is Linux?

Linux is an open-source operating system that is based on the Linux kernel. It was developed by Linus Torvalds in 1991 and is widely used in servers, desktop computers, mobile devices, and embedded devices.

The key components of the Linux operating system include the kernel, shell, system libraries, graphical server, and various utility programs. The kernel is the core of the operating system, managing hardware resources and providing essential services. The shell is the command-line interface that allows users to interact with the system. System libraries contain functions that applications can use, and the graphical server provides a windowing environment for desktop users.


Some of the popular shells in Linux include Bash (Bourne Again Shell), Zsh (Z Shell), Ksh (Korn Shell), and Dash. These shells offer different features and syntax but serve the same fundamental purpose of providing a command-line interface for users.

You can use the df command to check the disk usage in Linux. Running df -h will display disk usage information in a human-readable format, showing disk space usage, available space, and filesystem type for each mounted partition.

An inode is a data structure in a Unix-like file system that stores metadata information about a file or directory, such as permissions, ownership, timestamps, and file size. Inodes are used to uniquely identify files on a disk and are crucial for the filesystem to manage files efficiently.

You can use the find command to search for files in Linux based on various criteria such as filename, size, permissions, and modification time. For example, find /path/to/directory -name "*.txt" will search for all files with the .txt extension in the specified directory.

Hard links and symbolic links are two types of links used to reference files in Linux. A hard link is a pointer to the physical location of a file on disk, whereas a symbolic link is a pointer to the file's pathname. One key difference is that hard links cannot reference directories, while symbolic links can. Additionally, deleting the original file does not affect a hard link, but it breaks a symbolic link.

You can use commands like hostnamectl, uname, and lscpu to check system information in Linux. hostnamectl provides information about the system hostname and operating system, uname displays the system kernel and architecture information, and lscpu shows CPU details.

Linux uses various firewall tools such as iptables, ufw, and firewalld to set up and configure firewall rules. You can use these tools to define rules for allowing or blocking incoming and outgoing network traffic based on criteria like IP addresses, ports, and protocols.

Cron is a time-based job scheduler in Unix-like operating systems that allows users to schedule tasks to run at specified intervals or times. Users can create cron jobs by editing the crontab file to automate repetitive tasks like backups, updates, and maintenance.

You can use the systemctl command to check the status of a service in Linux. For example, systemctl status sshd will display the status of the SSH service, showing whether it is running or stopped and any error messages if applicable.

Log files in Linux are typically stored in the /var/log directory. You can use tools like tail, less, and grep to view and search log files. For example, tail -f /var/log/messages will display the last few lines of the messages log file and continuously update as new log entries are added.

Grep and sed are both text processing tools in Linux, but they serve different purposes. grep is used to search for specific patterns or text in files, while sed is used to perform text transformations, such as search and replace operations, on the content of files.

You can add a new user in Linux using the useradd command. For example, sudo useradd -m newuser will create a new user account with a home directory. You can then set a password for the new user using the passwd command.

The chmod command in Linux is used to change the permissions of files or directories. Permissions can be set for the file owner, group members, and others to control who can read, write, or execute a file. For example, chmod 644 file.txt will give read and write permission to the owner and read-only permission to group members and others.

You can use commands like ps, pgrep, and kill to find and kill a specific process in Linux. For example, ps aux | grep processname will display information about a process matching the specified name, and kill PID will terminate the process identified by its process ID.

SSH (Secure Shell) is a network protocol that allows users to securely connect to and manage remote systems. It provides encrypted communication for secure data transfer, remote command execution, and tunneling of network services. Users can use the ssh command to establish SSH connections to remote servers.

The rsync command in Linux is used for file synchronization and data transfer between systems. It can copy files locally or over a network while preserving file attributes and only transferring the differences between files to optimize performance. rsync is commonly used for backups, mirroring, and remote file synchronization.

You can use commands like ip, ifconfig, and route to check network configuration in Linux. For example, ip addr show will display information about network interfaces and IP addresses configured on the system, while route -n will show the routing table. In conclusion, Linux is a versatile operating system with a vast array of features and capabilities. Being knowledgeable about these key concepts and commands will help you excel in Linux-related interviews and demonstrate your proficiency with the system. Practice using these commands in a Linux environment to gain hands-on experience and be better prepared for your next Linux interview.

1. What is the difference between hard links and symbolic links?
  • Hard links:  These are pointers to the inode (metadata) of a file on the disk. Deleting the original file does not affect hard-linked files as they reference the same data.
  • Symbolic links (symlinks): These are references to the path of a file or directory. They can point to files or directories across filesystems and can be easily identified by the l in the file type field when using ls -l.
  • Use tools like top, htop, or glances to monitor real-time system performance (CPU, memory, processes). For longer-term analysis, use tools like vmstat, sar, or iostat.
  •  Systemd units are configuration files that define services, sockets, devices, and other entities managed by systemd. They are typically found in /etc/systemd/system and can be managed using commands like systemctl (start, stop, enable, disable, status).
  •  Use firewall management tools like iptables (traditional) or firewalld (more user-friendly and dynamic). For example, sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT allows incoming traffic on port 80 (HTTP).

File permissions in Linux are represented as read (r), write (w), and execute (x) for the owner, group, and others. Use commands like chmod to change permissions (chmod 644 file sets read/write for owner, read for group and others).

  • Use tools like ping to check connectivity, traceroute to trace the route packets take, netstat or ss to examine network connections, and ifconfig or ip to manage network interfaces.
  • Use cron jobs (crontab), but also consider using systemd timers for more advanced scheduling. Shell scripting (using Bash or other shells) combined with tools like awk, sed, and grep can automate complex tasks.
  • Use tools like pvcreate, vgcreate, lvcreate to manage physical volumes, volume groups, and logical volumes respectively. Commands like lvextend, lvreduce, lvresize allow resizing logical volumes on the fly.
  • Edit /etc/ssh/sshd_config to disable root login (PermitRootLogin no), use SSH keys (PasswordAuthentication no), and restrict SSH access (AllowUsers or AllowGroups). Restart SSH daemon after making changes (sudo systemctl restart sshd).
  • Use a live CD or USB to boot into a live environment. From there, mount the root filesystem, check logs (journalctl -xe), and repair the bootloader (grub-install or boot-repair for GRUB).

1. What is Power BI, and what are its components?

 Power BI is a business analytics tool developed by Microsoft. Its components include Power Query, Power Pivot, Power View, and Power Map.

 Power BI Desktop is a desktop application used for creating reports and dashboards, while Power BI Service is a cloud-based service for publishing, sharing, and collaborating on reports.

 Data can be imported into Power BI using Power Query, which allows you to connect to various data sources such as Excel, SQL Server, CSV files, and web data sources.

DAX (Data Analysis Expressions) is a formula language used in Power BI to create calculated columns and measures. It’s important because it allows users to perform complex calculations and analysis on their data.

To create a calculated column, you can use the “New Column” option on the Modeling tab in Power BI Desktop and write a DAX expression to define the calculation.

1. Explain the difference between calculated columns and measures in Power BI.

Calculated columns are computed during data refresh and stored in the data model, while measures are computed at query time based on the user’s interaction with the report.

Performance optimization techniques include reducing the number of visuals on a page, minimising data model size, optimising DAX expressions, and using query folding where possible.

Row-level security allows you to restrict access to specific rows of data based on the user’s role or identity. It can be implemented using DAX expressions or Power BI Service settings.

Date calculations can be performed using DAX functions such as DATEADD, DATESBETWEEN, and CALENDAR. It’s important to create a date/calendar table in the data model to support these calculations.

Power BI reports can be shared via Power BI Service by publishing them to the Power BI cloud service. They can also be shared via email, embedded in websites, or exported to other formats such as PDF or PowerPoint.

1. What is Salesforce?

 Salesforce is a cloud-based customer relationship management (CRM) platform that allows organisations to manage their sales, marketing, customer service, and more in a centralised system accessible via the internet.

Salesforce offers three primary cloud offerings: Sales Cloud, Service Cloud, and Marketing Cloud. Sales Cloud is used for sales automation, Service Cloud for customer service, and Marketing Cloud for marketing automation.

A Lead is a potential customer who has shown interest in your product or service but hasn’t yet been qualified as an opportunity. An Opportunity is a qualified Lead with a higher chance of closing a deal.

A Profile defines what a user can do in Salesforce, including the objects they can access and the permissions they have. A Role defines a user’s position in the hierarchy and determines the records they have access to based on that hierarchy.

 A Workflow Rule is a set of criteria that automatically triggers an action in Salesforce, such as sending an email, creating a task, or updating a field value, when certain conditions are met.

1. Can you explain the difference between a Standard Object and a Custom Object in Salesforce?

Standard objects are pre-built objects provided by Salesforce, such as Account, Contact, and Opportunity. Custom objects, on the other hand, are objects created by users to store data specific to their organisation’s needs.

Governor Limits are runtime limits enforced by Salesforce to ensure the stability and performance of its multitenant architecture. They include limits on data storage, API requests, and processing time. Handling them involves writing efficient code, using batch processing, and optimising queries.

Apex is a programming language used to write custom business logic and perform complex calculations in Salesforce. Visualforce, on the other hand, is a markup language used to create custom user interfaces. While Apex runs on the Salesforce server, Visualforce pages are rendered on the client side.

Triggers are pieces of Apex code that execute before or after data manipulation events, such as insert, update, or delete operations. Workflow Rules, on the other hand, are declarative automation tools used to automate standard internal procedures. Triggers are more powerful and flexible but require development skills, while Workflow Rules are easier to configure but have limitations.

Bulk data loading in Salesforce can be done using tools like Data Loader or through the Salesforce API’s Bulk API. It involves preparing data in CSV format, mapping fields, and ensuring data quality before loading.

1. What is Data Science and why is it important?

 Data Science is a multidisciplinary field that uses scientific methods, algorithms, and systems to extract insights and knowledge from structured and unstructured data. It is important because it helps businesses make data-driven decisions, uncover hidden patterns, and gain a competitive edge.

Python and R are the most commonly used programming languages in Data Science due to their extensive libraries for data manipulation, analysis, and machine learning.

Supervised learning involves training a model on labelled data, where the target variable is known, while unsupervised learning involves finding patterns and structure in unlabeled data.

Regression problems involve predicting a continuous value, while classification problems involve predicting a categorical label.

Overfitting occurs when a model learns the training data too well and performs poorly on unseen data. To prevent overfitting, techniques like cross-validation, regularisation, and using more data can be employed.

1. Can you explain the Bias-Variance Tradeoff?

The Bias-Variance Tradeoff refers to the balance between the bias of a model (error due to overly simplistic assumptions) and the variance (sensitivity to small fluctuations in the training data). Finding the right balance is crucial to building models that generalise well to unseen data.

Techniques for feature selection include filtering methods (e.g., correlation analysis), wrapper methods (e.g., forward/backward selection), and embedded methods (e.g., Lasso regression).

Missing data can be handled by imputation (replacing missing values with a calculated estimate), deletion (removing rows or columns with missing values), or using algorithms that can handle missing data directly.

Regularisation is a technique used to prevent overfitting by adding a penalty term to the model’s cost function, which discourages overly complex models. Common types of regularisation include L1 (Lasso) and L2 (Ridge) regularisation.

A/B testing is used to compare two or more versions of a product or service to determine which one performs better. It is commonly used in marketing and product development to make data-driven decisions and optimise performance.
