Key terms and concepts
Functional testing
Functional testing asserts conditions to confirm expected behaviors, increasing your confidence in the correctness of your configuration changes. For each test request, it compares the expected and actual HTTP response or property setting value.
Here are the terms you may encounter when working with functional testing.
Property version
Property versions refer to Property Manager property versions. To run a test for a property version, associate it with a test suite and create test cases. If you associate a test suite with a property version, Test Center looks for up to five previous versions of that property in Test Center and, if found, propagates test cases from the test suites created for those versions to your new test suite. Review these test cases and verify that they still apply. See also: Create a test suite, Run a regression test
Test suites
Test suites act as containers for test cases. You can add a name and description to a test suite to provide more details about it and the included test cases. You can also set whether the test suite is locked or stateful. Test suites can be tested as test objects associated with property versions or on their own. See also: Create a test suite, Run test for a test suite
Locked test suites
A locked test suite can be modified only by its editors and owners. Test Center users who create locked test suites automatically become their owners, and they can also designate other owners. Editors can edit a locked test suite (except for its Locked status), add new test cases to it, and remove those already included. If the test suite is also stateful, editors can reorder the included test cases. Owners can additionally manage the test suite's edit access and delete the test suite. Any Test Center user in your account can request edit access from the test suite's owners. Owners are notified about the request by email; once they approve or reject it, the requesting user also gets a notification email. See also: Give the edit access to a locked test suite
Stateful test suites
In a stateful test suite, test cases can be executed in the order in which they were added, and cookies and session information can be retained for subsequent test cases.
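Conceptually, running a stateful test suite is like reusing one HTTP session across ordered requests. The following sketch illustrates the execution model with Python's `requests` library; it's an analogy only, not Test Center code, and the URLs are placeholders.

```python
import requests

# Analogy only: a stateful test suite runs its test cases in order and
# carries cookies forward, much like a shared requests.Session.
session = requests.Session()

# Test case 1: the response may set a session cookie.
session.post("https://www.example.com/login",
             data={"user": "demo", "password": "demo"})

# Test case 2: runs after test case 1 and automatically reuses the
# cookies retained from it, just as a stateful suite would.
profile = session.get("https://www.example.com/profile")
print(profile.status_code)
```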
Test cases
A test case is the basic unit of functional testing. It includes all the settings the test runs with: a condition, a test request, and a client profile. You can combine several test cases in a test suite or run a test for a single test case. See also: Create a test suite
Client profiles
Client profiles are constituents of test cases. A client profile is the combination of a browser type, a geographical region, and an IP version that characterizes the client making the test request. By default, the geographical region is set to US. See also: Create and add test cases
Conditions
Conditions are constituents of test cases. A condition is the criterion evaluated against the HTTP response to the test request, or against a specific property setting applied to the test request. For example, Response header foo has a value that equals bar, or Caching option is no-store. See also: Create and add test cases
This is the list of available condition types together with their descriptions.
| Condition type | Description |
| --- | --- |
| Response code | Verifies the occurrence of a response code. |
| Response header | Verifies the occurrence of a response header. |
| Content Provider Code | Verifies the settings of the Content Provider Code behavior in Property Manager. |
| Cache key query parameters | Verifies the settings of the Cache key query parameters behavior in Property Manager. |
| Caching | Verifies the settings of the Caching behavior in Property Manager. |
| Ignore case in cache key | Verifies the settings of the Ignore case in cache key behavior in Property Manager. |
| Last mile acceleration (Gzip compression) | Verifies the settings of the Last mile acceleration (Gzip compression) behavior in Property Manager. |
| Log request details | Verifies the settings of the Log request details behavior in Property Manager. |
| Origin server - Cache key hostname | Verifies the settings of the Cache key hostname behavior for the Origin server in Property Manager. |
| Prefetch objects | Verifies the settings of the Prefetching behavior in Property Manager. |
| Redirect | Verifies the settings of the Redirect behavior in Property Manager. |
| SureRoute | Verifies the settings of the SureRoute behavior in Property Manager. |
| Tiered distribution | Verifies the settings of the Tiered distribution behavior in Property Manager. |
| Variable | Verifies the settings of the Set variable behavior in Property Manager. |
| Alerted rules | Verifies whether an alerted rule in the security configuration is applied to the test request. |
| Denied rule | Verifies whether a denied rule in the security configuration is applied to the test request. |
| Policy name | Verifies whether a policy in the security configuration is applied to the test request. |
| API ID | Verifies whether an API registered in API definitions with a specific API ID is applied to the test request. |
Test request
Test requests are constituents of test cases. They are combinations of:
- Request method. Possible options are GET, HEAD, or POST.
- URL. A fully qualified URL of the resource to test.
- Customized headers. Customized request or Pragma headers to use during a test run.
- Keywords. Keywords let you filter test requests. After you create a test request with particular keywords, you can find it quickly by entering the keywords in the Filter field above the main list.
See also: Create and add test cases
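To see how the constituents fit together, here is a minimal sketch of a functional test case as a plain data structure. The field names and the condition syntax are illustrative assumptions rather than the exact Test Center API schema; refer to the Test Center API documentation for the real format.

```python
# Illustrative sketch only: field names and condition syntax are
# assumptions, not the exact Test Center API schema.
test_case = {
    # Condition: the criterion evaluated against the response.
    "condition": 'Response header "foo" has a value that equals "bar"',

    # Test request: method, fully qualified URL, headers, and keywords.
    "testRequest": {
        "requestMethod": "GET",
        "testRequestUrl": "https://www.example.com/index.html",
        "requestHeaders": [
            {"headerName": "Pragma", "headerValue": "akamai-x-get-cache-key"},
        ],
        "keywords": ["homepage", "caching"],
    },

    # Client profile: browser type, geographical region, and IP version.
    "clientProfile": {
        "client": "CHROME",
        "geoLocation": "US",
        "ipVersion": "IPV4",
    },
}
```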
Variables and functions
Variables let you reuse specific values in test cases' input fields. They enable you to create test cases with complex metadata and run very specific tests. Variables can be assigned statically or dynamically; to extract a value from a test case response and assign it to a variable dynamically, use functions.
Each variable consists of a name and an assigned value. You can add and edit variables as well as delete those created by users of your account. You can also view all test cases in which each variable is used.
Variables and Functions are also the names of the reference tabs in the right-side panel of the Create and add test cases window. You can use these tabs to create or edit variables, check the available functions, and test your own entries.
See also: Variables, Create and manage variables
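As an illustration of the two assignment styles, the sketch below defines one statically assigned variable and one dynamically assigned variable populated by a function. The `{{...}}` reference style and the function name are hypothetical placeholders; check the Variables and Functions reference tabs for the syntax Test Center actually supports.

```python
# Hypothetical sketch: variable names, the {{...}} reference style, and
# the extraction function are placeholders, not Test Center syntax.
variables = [
    # Statically assigned: a fixed value reused across test cases.
    {"variableName": "hostname", "variableValue": "www.example.com"},

    # Dynamically assigned: a function extracts the value from a test
    # case response at run time (hypothetical function name).
    {"variableName": "sessionId",
     "variableValue": "fn_extractResponseHeader('Set-Cookie')"},
]

# A test case input field could then reference the variables, for example:
test_url = "https://{{hostname}}/account?session={{sessionId}}"
```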
Rules
Rules is the name of one of the reference tabs in the right-side panel of the Create and add test cases window. For test suites associated with a property version, the Rules tab contains the JSON representation of the entire property version's rule tree. You can use it either as a reference for creating conditions or as a generator of conditions.
When you're creating a new condition or editing an existing one, Test Center highlights all behaviors set for your property in Property Manager. When you double-click a highlighted area, Test Center autogenerates a condition for the test case to use.
For test suites not associated with a property version, you can use this tab to associate them.
See also: Create and add test cases
Hostnames
Hostnames is the name of one of the reference tabs in the right-side panel of the Create and add test cases window. If your test suite is associated with a property version, the Hostnames tab contains the list of all the hostnames assigned to the property in Property Manager. You can use it as a reference when creating a test request or to add test case variations.
See also: Create and add test cases
Comparative testing
Comparative testing compares the behavior of the control hostname, usually a live user-facing site, with the behavior of a changed hostname that hosts your changed configuration settings. The results show the differences between the two behaviors, predicting how your configuration changes will behave once activated.
Comparative testing complements functional testing. It catches problems that weren’t anticipated and explicitly tested for in functional testing. This approach increases your confidence in the safety of the configuration changes.
You can use comparative testing to compare how a hostname behaves on the staging and production environments, or on two production environments. For example, you can compare the behavior of property changes on a test hostname that doesn't carry live traffic with a hostname that does.
Here are the terms you may encounter when working with comparative testing.
Test definition
A test definition is the basic unit of comparative testing. It is an ordered list of comparative test cases together with settings common to all included test cases.
A test definition's settings include:
- Hostname. It's the website you want to compare your configuration changes with.
- Associated configuration. It's the configuration associated with the hostname for which the test definition was created.
- IP versions. They're the IP versions Test Center uses to make requests while running the test.
To run a test for a test definition, it must include at least one test case. See also: Run a test for test definitions
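Here is a minimal sketch of a test definition as a data structure, based on the settings listed above. The field names are assumptions for illustration, not the exact Test Center API schema.

```python
# Illustrative sketch only: field names are assumptions.
test_definition = {
    "hostname": "www.example.com",                # site to compare against
    "associatedConfiguration": "example_config",  # configuration for the hostname
    "ipVersions": ["IPV4", "IPV6"],               # IP versions used for requests
    # Ordered list of comparative test cases; at least one is required
    # before the test definition can be run.
    "testCases": [],
}
```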
Test cases
In comparative testing, a test case specifies the requested page and the comparisons to be performed for the page, as well as all embedded objects and redirect targets resulting from the page request. It is a part of a test definition. See also: Create a test definition
Test request
A test request is a constituent of a comparative test case. It is a combination of:
- URL to test. It's the URL of the resource you want to test. It needs to be fully qualified and have the same hostname as the test definition to which the test case belongs. Currently, the only supported method is GET.
- Customized headers. Customized request or Pragma headers to use during a test run.
Comparisons
Comparisons are constituents of a comparative test case. You can select what you want Test Center to compare during the test run.
| Comparison option | Description |
| --- | --- |
| Response code | When selected, Test Center compares the returned HTTP response codes, but only their numerical part, not the text that might accompany it. For example, 404, not 404 Not Found. |
| Response headers | When selected, Test Center compares the presence and values of headers in the corresponding HTTP responses. By default, every header in the response is compared (except for a predefined list of headers). |
| Response headers to compare or ignore | When you provide an input, Test Center either compares or ignores the provided response headers. |
| CP code | When selected, Test Center compares the CP codes used on the specified environments. CP codes are content provider codes, used by Akamai Technologies, Inc. to identify traffic for billing and reporting purposes. |
| Caching option | When selected, Test Center compares the caching option used on the specified environments. |
| Cache key | When selected, Test Center compares the cache keys used on the specified environments. Note: If you use a private cache key or an origin response header to control the ability to cache content, you may get unreliable results. |
| Metadata variables | When selected, Test Center compares the presence and values of every metadata variable used in the processing of the request. This includes both user-defined and system variables. Hidden variables are only included if specific instructions are included in the hostname's delivery configuration. Sensitive variables are never included. By default, every variable set is compared when the test response is processed (except for a predefined list of variables). |
| Metadata variables to compare or ignore | When you provide an input, Test Center either compares or ignores the provided metadata variables. |
| Origin server | When selected, Test Center compares the origin servers used for the specified sites and environments. The origin server is the physical location, associated with an IP address, that your content is retrieved from. Note: The specific value compared will be incorrect if origin-server.dns.name.value is used to override the hostname that edge servers reference in DNS. |
| Ignore embedded objects | When selected, Test Center performs the corresponding comparisons for the requests triggered by the test case while ignoring requests for embedded objects and redirect targets. This works only if the objects are on the same hostname as the URL to test. |
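Putting the pieces together, here is a hedged sketch of a comparative test case: a GET-only test request plus a selection of the comparisons from the table above. The field names and values are illustrative assumptions, not the exact Test Center API schema.

```python
# Illustrative sketch only: field names and values are assumptions.
comparative_test_case = {
    "testRequest": {
        # GET is currently the only supported method.
        "testRequestUrl": "https://www.example.com/index.html",
        "requestHeaders": [
            {"headerName": "Pragma", "headerValue": "akamai-x-cache-on"},
        ],
    },
    # Comparisons Test Center performs during the test run.
    "comparisons": {
        "responseCode": True,
        "responseHeaders": {"enabled": True, "ignore": ["Date", "Expires"]},
        "cpCode": True,
        "cachingOption": True,
        "cacheKey": False,        # skipped here: private cache key in use
        "metadataVariables": False,
        "originServer": True,
        "ignoreEmbeddedObjects": True,
    },
}
```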
Runs and results
Here are the terms you may encounter when reading test run results.
Test results
A test result for functional testing is a comparison of the expected and the actual value; it can be Passed, Failed, or Inconclusive. For comparative testing, the result is a list of differences, Diffs, found for the comparisons you set in the test cases within the test definitions for which the test was run. See also: Test run results
Passed test
In functional testing results, a Passed result means that the Expected outcome of the test run matches the Actual outcome.
Failed test
In functional testing results, a Failed result indicates that the Expected outcome of the test run differs from the Actual outcome and that the implemented changes need further investigation.
Inconclusive test
In functional testing results, an Inconclusive result indicates that Test Center was unable to determine the result. The possible causes are:
- A dynamically assigned variable could be resolved to more than one value.
- A forward request was needed to evaluate a condition, but there was none because the object was served from cache.
- Test Center failed to fetch the logs and therefore couldn't verify the condition.
- A forward request failed because a new connection to the origin server couldn't be established.
- The test case was not run because of an error. In this case, check the message on the interface to learn why the test case couldn't be run.
Diff
Diffs are the results of comparative testing: differences detected by Test Center when comparing your current website configuration with the one you want to implement. Diffs are neither good nor bad; some of them might be the expected outcomes of the configuration change you're testing. It's up to you to decide whether a diff was expected or unexpected. An expected diff is what you want to achieve by implementing the tested configuration changes. An unexpected diff might indicate a configuration error that, if activated on production, could lead to a denial of service or other production issues. Mark such unexpected diffs as not accepted and investigate them.
For example, if you updated the time-to-live for a file name, you would expect to find a caching option diff in the results for that file name. In that case, accept the diff to signify that it indicates expected behavior and needs no further investigation.
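As a rough illustration, the caching diff from the time-to-live change above might surface in the results along the lines of the hypothetical sketch below; Test Center's actual result format differs.

```python
# Hypothetical illustration of a diff, not Test Center's actual output.
diff = {
    "comparison": "Caching option",
    "url": "https://www.example.com/styles/site.css",
    "controlValue": "max-age=1d",   # current live configuration
    "testValue": "max-age=7d",      # changed configuration under test
    # The TTL change was intentional, so this diff is marked accepted.
    "accepted": True,
}
```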
Diff accepted
An accepted diff is a detected difference that you expected as a result of your configuration changes.
Diff unaccepted
An unaccepted diff is a detected difference that you didn't expect from your configuration changes and that needs further investigation.
Present
Present is an additional comparison done by Test Center. If it appears in the test run results, it means that an embedded object on the site doesn't occur in one of the networks compared during the test run.