QUESTIONING CONVENTIONS: ARE PRODUCT CONVENTIONS TRADING OFF THE USABILITY OF PRODUCTS FOR SHORT TERM USER SATISFACTION?

Mapping conventions are a key aspect of user centered design as they present users with familiar interactions in unfamiliar products. Conventions evolve over time and are slow to be adopted, requiring a high percentage of acceptance within a society, which ensures that conventions exhibit a sufficient level of usability. However, this paper argues that while usability is a necessary condition for good interactions, it is not a sufficient one. Therefore user centered design, which accentuates individuals' bias towards conventions, may in fact be hindering the innovation of product interactions. This paper argues that a cognitive approach should be adopted in order to understand and reassess product interactions. An experiment was carried out that demonstrates the influence that simple mappings can have on cognitive load. The results showed that basic mappings of the types that are found throughout product conventions can have a substantial impact on mental load and subsequently product interaction.


INTRODUCTION
User Centered Design, which puts the end user at the core of design, has been widely accepted as vital to the success of products and systems (Maguire, 2001; Abras et al., 2004; Kurosu, 2007; Shackel, 2009). Poorly designed systems increase product learning curves and hinder product usability. User centered design aims to address such issues by considering the end user throughout the design of a product. However, this paper argues that user centered design on its own is not a sufficient condition for the design of product interactions. Furthermore, many aspects of user centered design may in fact be hindering product design as users show a preference for familiar interactions, which is not an adequate criterion for determining optimal interactions. Abras et al. (2004) highlight the importance of user centered design not just as a series of processes and principles but as a philosophy which should be applied to design. They accept that the degree to which end users are considered can vary vastly from product to product; the critical concept, however, is that they are considered. Norman (Norman and Draper, 1986; Norman, 1988), a key author in the field, summarized the main principles of user centered design in his influential book The Design of Everyday Things:
• Make it easy to determine what actions are possible at any moment.
• Make things visible, including the conceptual model of the system, the alternative actions, and the results of actions.
• Make it easy to evaluate the current state of the system.
• Follow natural mappings between intentions and the required actions; between actions and the resulting effect; and between the information that is visible and the interpretation of the system state.
Norman maintains that designs must effectively exploit that which comes naturally to individuals. In order to accomplish this task effectively, Norman highlights the critical factor of designers selecting appropriate interactions for operations.
Importantly, however, Norman concedes that it is often technological constraints, as opposed to design decisions, that are the critical factor determining interactions. Norman refers to inappropriately selected interactions, such as technologically driven interactions, as mapping problems. He reveals that mapping problems are abundant throughout products, providing examples such as the ambiguity of analogue clocks or the complexity associated with vast arrays of inputs such as mixing desks. He also raises the issue of mapping problems during everyday product interactions, offering the example of faucets:
• Which faucet controls the hot, which the cold?
• What do you do to the faucet to make it increase or decrease the water flow?
• How do you determine if the volume or temperature is correct?
Norman suggested that the solution to mapping problems is cultural conventions. Conventions, as described by Norman (1999), are learned constraints that exclude certain behaviours while promoting others. Taking the tap example, convention states that all left taps should be hot and all right taps should be cold. Convention states that screws should tighten clockwise and loosen anticlockwise; however, rules of convention are not always followed, for example vertical taps in shower rooms violating the left/right convention.
Conventions must not simply be viewed as, and are distinct from, physical constraints. A mouse arrow being located within the confines of a monitor is an example of a physical constraint: the user is simply unable to move the arrow outwith the limits of the screen. A scroll bar at the side of a screen is an example of a convention: while the scroll bar may be limited to vertical movement, the user is not. In order to operate the bar the user must learn to hold the mouse button down on the bar while moving the mouse vertically. In the same regard, even though the physical location and movements of faucets are fixed, a user must learn to twist the tap in order to operate it correctly.
Norman argues that truly universal conventions are needed to address mapping problems, for if conventions were truly universal then an operation would only need to be learned once, and that knowledge could then be applied to any similar product. Norman further explains that conventions are not simply fixed operations; they are operations that evolve over time and are slow to be adopted, requiring a high percentage of acceptance within a society. Critically, Norman also reveals that once a convention has been adopted by society it is extremely difficult to overturn.
Consequently, given that numerous conventions predate user centered design and were born out of product functionality, it may be the case that mapping problems are more substantial than Norman envisaged, as many contemporary conventions are themselves technologically driven interactions. Subsequently, the user centered principle at the core of user centered design, together with the persistence of conventions, may now be hindering the innovation of interactions, as individuals could be showing bias towards potentially suboptimal conventions; not having to relearn a convention does not necessitate that the convention itself is a natural interaction. This raises the question of whether or not product designers should rethink the impact of conventions and consider whether conventions, even if they do result in user satisfaction, may in fact be hindering the progression of product design. Are designers trading off optimization and innovation for short term user satisfaction? While many aspects of the theory of conventions are sound, there still remains the underlying problem of what interactions should be standardized and why, i.e. what are the key elements that make an interaction/mapping natural? These concerns are shared by Sharples et al. (2002), who argue for a cognitive approach to design, stating that while usability is a necessary condition for good design it is not a sufficient one. Indeed, Norman himself argued for the need for a deeper understanding of cognitive science in design, emphasizing the appalling lack of knowledge designers have in regard to cognitive science. Yet thirty years on there is a distinct lack of research regarding the selection and suitability of product interactions at a fundamental level. Furthermore, the issue of potentially inefficient conventions has many parallels with issues faced by cognitive scientists in the field of instructional design.
As explained by Sweller (1994), instructional designers have rejected persisting with traditional learning and teaching practices in favor of developing new ones based on a deeper understanding of the cognition of the learner. It is the contention of this paper that learning how to operate a product is a learning process akin to any other. Subsequently, the aim of this paper is to assess interaction mappings from a cognitive perspective in order to challenge, or verify, interaction conventions.

Instructional design
Instructional design is the process of designing instructional experiences, such as e-learning, with the goal of making the attainment of knowledge and skill more functional and appealing. Attention has been brought to the parallel between instructional and product designers as they both share the common goals of developing solutions that are effective, efficient, and appealing.
The availability of working memory has long been identified as a critical aspect of instructional design. Cognitive load theory dictates that incorrect instructional procedures raise cognitive load by imposing needless additional workloads on the available working memory (Mousavi et al., 1995; Sweller, 1988; Sweller, 1994; Sweller et al., 1998; Sweller et al., 2011). Cognitive load theory is based on the following understanding of the brain's cognitive architecture:
• The brain has a finite amount of working memory which is only capable of holding and processing a small amount of information at any given time.
• The brain has an abundance of long term memory which is, for all intents and purposes, infinite in size.
• Schema construction is a principal learning mechanism.
Sweller (1994) explains that schemas are cognitive structures which consist of organized elements of information and their interrelationships. The brain utilizes schemas to organize current knowledge, which provides a basis for interpreting new information. Schemas are stored in long term memory and allow individuals to recall groups of information as individual entities. This allows the brain to process multiple elements as a single unit, reducing cognitive workload and therefore freeing working memory. Ultimately, the degree of available working memory is the defining factor regarding the ease of schema generation.
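The chunking mechanism described above can be sketched in a few lines of Python. This is an illustration of ours, not part of the original theory or study: the capacity limit, element names, and "slots" framing are all assumptions made for the sake of the example.

```python
# Illustrative sketch: chunking elements into a schema reduces the number
# of working-memory "slots" a task occupies. The slot count is an assumption.

WORKING_MEMORY_SLOTS = 4  # assumed small, finite working-memory capacity

# A novice driver must hold each element of a gear change separately:
novice_elements = ["clutch", "gear-lever", "rev-counter", "bite-point", "throttle"]

# An expert recalls the same elements as a single schema stored in
# long term memory, so they occupy one unit of working memory:
expert_elements = ["gear-change-schema"]

def fits_in_working_memory(elements, slots=WORKING_MEMORY_SLOTS):
    """Return True if the task's elements fit within working memory."""
    return len(elements) <= slots

print(fits_in_working_memory(novice_elements))  # False: 5 elements > 4 slots
print(fits_in_working_memory(expert_elements))  # True: one chunk
```

The point of the sketch is only that the same information, once organized into a schema, is processed as one unit rather than many.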
Conventions can thus be understood as pre-existing schemas, i.e. schemas that have already been committed to memory by a certain (high) percentage of society, or interactions that are iteratively close to those pre-existing schemas. Therefore, in order for conventions to be optimal they should be based on schemas that do not impose irrelevant or unrelated cognitive loads on a user; however, drawing parallels to instructional design suggests it is unlikely that this is the case. Paas et al. (2003) maintain that many conventional instructional procedures impose irrelevant or unrelated cognitive loads on the learner because they were created without contemplation, or understanding, of cognitive workload. Subsequently, there is now a vast area of research in instructional design aimed at applying processes and principles to reduce extraneous cognitive load whilst learning, because extraneous and task specific cognitive loads are additive. Removing irrelevant workloads frees up cognitive space which can then be utilized to complete the instructional task. Given that cognitive load theory is a relatively recent development, it seems fair to conclude that many of today's conventions were also created without, or use conventions that predate, an understanding of cognitive workload. Therefore, their fundamental control systems may be imposing extraneous cognitive loads on individuals whilst they are operating them. Take the example of learning to operate a car, as there is a vast sample size for reaching a basic standard of operation: while individuals who are capable of driving may feel that cars are intuitive to operate, the evidence suggests the contrary. The average individual requires 47 hours of lessons and 22 additional hours of practice to pass their driving test (AA Driving School, 2014).
This is nearly the same amount of time, 70 hours, required for an individual to acquire a private pilot's license, so it is clearly not a trivial task (Let's Go Flying, 2014). Once the operation of the product has been learnt and committed to memory it would seem that the brain views the task with indifference. However, that does not mean that every task, or indeed convention, need impose the same cognitive load on an individual, or that they do not impose irrelevant or unrelated cognitive loads on a user. Cognitive load (Paas, 1992; Paas et al., 2003; Paas et al., 2004; Sweller et al., 1998) spans multiple dimensions and represents the overall load imposed on the cognitive system during the undertaking of a task. The factors affecting cognitive load can be divided into two categories: causal factors and assessment factors, where causal factors are factors that influence the cognitive load and assessment factors are influenced by the load. The causal factors encompass variables such as subject and environmental characteristics and their subsequent interactions. Subject characteristics are relatively stable characteristics which relate to the individual carrying out the task, for example cognitive capabilities and experience. Environment characteristics relate to elements such as room temperature and background noise. Task characteristics include the type of task, the associated reward, and task constraints. Task interactions can also be influenced by unpredictable factors such as motivation and performance spikes. Cognitive load (Fig 1) can be conceived through grouping variables into the following three dimensions:
• Mental load - Mental load is the total load imposed by the environment and the task. Mental load is a task specific constant which is unrelated to an individual's abilities or characteristics.

• Mental effort - Mental effort refers to the amount of cognitive processing an individual undertakes while carrying out a task. Mental effort is subject to the above mentioned causal factors.
• Controlled processing - Controlled processing is processing that is consciously controlled by the brain, for example when one has to concentrate on a task and is consciously aware of thoughtful effort.
• Automated processing - Tasks that are automated by the brain and carried out without mental effort. As individuals become accustomed to a task, controlled processing can become automated processing, allowing the user to carry out the task with reduced mental effort. Conventions could be viewed as a type of automated task.
• Performance - Performance is an expression of the success of an individual in regard to the goal of the task. Performance is a reflection of the mental load, the mental effort, and the learner; therefore performance is subject to the causal characteristics.
Mental load, mental effort, and performance are all components of cognitive load, where mental load is a reflection of the task only and mental effort and performance are influenced by all the causal factors. Mental load is a construct of the task environment and task interactions and is consistent for a given task.
Mental effort reflects the total cognitive resources that are actually applied to task completion, hence mental effort is the critical aspect controlling task completion. Indeed the degree of mental effort required whilst undertaking a task is considered to be the nucleus of cognitive load. Consequently, mental effort can be utilized to provide an effective measurement of cognitive load.
Take the example of someone who already knows how to cycle learning to operate a motorcycle. Several of the controls and interactions involved in operating a motorcycle overlap with those used to operate a bicycle, for example steering and braking. However, other interactions are unique to the motorcycle, for example changing the gears and signaling. In order to understand how the familiarity of interactions can affect the learning process, the interaction types can be traced through the cognitive schema diagram (Figure 2 - Interactions traced through the Cognitive Load Schematic):
1. Box one shows some of the operational aspects of the task/user interactions. The contents of the box refer to the interactions required to carry out the named task; for example, acceleration refers to twisting the throttle on the handle to accelerate the motorcycle. The three causal factors (task environment, task interactions, and learner) combine to influence the overall cognitive load. The task environment and the task/user interactions combine to produce the total mental load of a task, while the learner characteristics influence the mental effort (the mental effort is interconnected with controlled and automated processing). The task environment and learner variables have not been examined throughout the diagram so as to trace just the physical interactions.
2. Box two shows the interaction aspects of the task that the user is already familiar with through operating a bicycle (automated processing). The user already knows how to steer and brake the motorcycle as these are direct emulations of riding a bicycle. Consequently, the schemas for such actions already exist within the user's brain (conventions). As previously discussed, the brain does not have to apply any cognitive resources to automated processing.
3. Box three shows the aspects of the task that the user is not familiar with and therefore has to apply cognitive resources to carry out (controlled processing). The degree of familiarity may vary, for example learning to operate the ignition switch compared to learning to change gears. Under such circumstances the user may be able to alter existing schemas or may have to construct totally new schemas. As the user becomes familiar with the product they start to form schemas to govern the interactions shown in box three. The end result of the process is controlled processing becoming automated processing, i.e. those aspects moving from box three to box two. Sweller et al. (1998) further explain that cognitive load theory differentiates between three types of cognitive load: intrinsic cognitive load, germane cognitive load, and extraneous cognitive load. All three cognitive loads have the potential to be active simultaneously.
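The migration of interactions from box three (controlled processing) to box two (automated processing) can be sketched as a small state model. This is our own illustration of the diagram, not the authors' implementation; the interaction names are taken from the motorcycle example, while the repetition threshold is an arbitrary assumption.

```python
# Sketch of Fig 2: interactions start as controlled processing and move
# to automated processing with practice. Threshold is illustrative only.

automated = {"steering", "braking"}               # box two: bicycle schemas reused
controlled = {"ignition", "gears", "signaling"}   # box three: new to the learner

practice_counts = {name: 0 for name in controlled}
AUTOMATION_THRESHOLD = 3  # assumed repetitions before an interaction automates

def practice(interaction):
    """One repetition of an interaction; automate it once practiced enough."""
    if interaction in controlled:
        practice_counts[interaction] += 1
        if practice_counts[interaction] >= AUTOMATION_THRESHOLD:
            controlled.discard(interaction)
            automated.add(interaction)

for _ in range(3):
    practice("gears")

print("gears" in automated)       # True: moved from box three to box two
print("signaling" in controlled)  # True: still requires mental effort
```

The point is only the direction of travel: with repetition, an interaction stops consuming working memory and becomes schema-driven.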
Intrinsic cognitive load relates to the inherent difficulty of the subject under instruction, for example the difficulty of addition in comparison to Newtonian mechanics. The inherent difficulty of such tasks cannot be altered by the instructor; however, the tasks can be broken down into schemas which can be taught and then combined to provide an understanding of the problem as a whole.
Extraneous cognitive load is load that is not essential for undertaking or learning a task. Extraneous cognitive loads can be imposed by such things as bad teaching practices, substandard problem solving techniques, or poorly designed and inadequate environments. For example, extraneous cognitive load could arise when an instructor is describing a product to a student. A product could be described using visual mediums, verbal mediums, or a combination of both. If the instructor chose to describe the appearance of a product using only the verbal medium, that would clearly be a far less effective method than simply showing the student a picture. The verbal method would load the student with irrelevant and unclear information; this redundant cognitive load would be classified as extraneous. Because the brain has limited cognitive resources, cognitive load theory dictates that extraneous cognitive loads must be reduced in order to maximize the free cognitive space for intrinsic and germane cognitive loads.
Germane cognitive load is the load which is devoted to the processing, formulation, and automation of schemas. Germane load is considered to be a constant which cannot be directly influenced by an instructor. However, van Merrienboer, Sweller and Paas consider reducing extraneous load, thereby freeing up the available cognitive capacity for the germane load, to be a critical aspect of cognitive load theory. Indeed, the development of schemas and the movement of load from controlled processing to automated processing is the very basis of learning. If learning scenarios can be effectively manipulated in the described manner, the associated learning curve will be reduced.
(IJCRSEE) International Journal of Cognitive Research in Science, Engineering and Education, Vol. 3, No. 2, 2015. www.ijcrsee.com
Learning to operate products is a learning process like any other, and there are instructional situations during such learning, for example driving lessons, where cognitive load theory could be applied in a traditional sense. However, most product operations are not introduced under the guidance of a tutor, and even if products are learnt under instruction the physical design of the product remains fixed. The physical design, and interaction mappings, of a product control the manner in which users interact with it while carrying out a product related task; and interactions have already been highlighted by Paas & Merrienboer as a causal factor. Not all interactions are created equally; for example, an analogue control offers a wider degree of freedom than a digital one.
Given the vast array of controls, inputs, and functionality of products, there is obviously a disparity in the complexity and learnability of products. Cognitive load theory affirms that an instructor can reduce learning curves through proper teaching practices, problem solving techniques, and adequate environments. It is the position of this paper that the same principles can be applied to product design, where the designer takes the role of the instructor. This presents the designer with an interesting quandary, for clearly utilizing conventions is going to reduce cognitive load, as they make use of pre-existing schemas and automated processing. However, by doing so designers may be utilizing schemas that impose irrelevant cognitive loads on the user.

MATERIALS AND METHODS
An experiment was designed to explore the relationship between basic mappings (i.e. 2D inputs), resulting actions, and learnability. The purpose of the experiment was to investigate the effect that even the most basic input mappings can have on cognitive load, and to develop an understanding of how the brain reacts to basic 2D mappings.

A. Subjects
The participants were 31 adults (23 male, 8 female) from the following age groups.
As an incentive to concentrate on the task, the individual with the best high-load performance (time-wise) received a £20 book voucher. While this added an element of pressure, it was felt that providing an incentive was important to motivate participants and ensure that they were maximizing their cognitive effort.

B. Environment
The environment consisted of a simple 2D computer game, Pac-Man (Fig 3). The user controlled the navigation of the Pac-Man through a 2D maze-like environment. The goal of the game was to navigate the maze and collect pellets. Traditional Pac-Man includes ghosts, which were removed for the experiment so as not to add an additional cognitive load. The maze had a start and a finish point; consequently the goal of the game was not simply maze navigation, removing the impact of route memorizing. The game was controlled using four simple inputs, the arrow keys.

C. Instruction
General instruction regarding the goal, and the controls, of the game was demonstrated to the users prior to the experiment. The users were given up to five minutes to get accustomed to the controls. Completion of a single level under normal conditions takes approximately one and a half minutes.

D. Design
The experiment consisted of three scenarios aimed at adapting the control inputs in order to change the level of mental effort required to complete the task.

Figure 4. Control inputs
Scenario One: In the first scenario the users were asked to navigate the maze using the normal input controls (Fig 4). The users were given three attempts to complete the game; the average measurements were then recorded. The initial scenario was based on the premise that users would be familiar with the controls of scenario one; the purpose of giving the users three attempts was to reduce the influence of factors outlined by Paas et al., such as performance spikes and dips. Performance spikes refer to situations where an individual generates an atypically good result, for example a poor player getting a strike in ten pin bowling. Performance dips refer to a good player generating an atypically poor result.
Scenario Two: In the second scenario the users were asked to complete the game five times using reversed controls. The users were given no time to learn the new controls, as the aim was to capture the learning curve as part of the experiment. The aim of scenario two was to investigate the impact of changing the controls along a single axis/dimension.
Scenario Three: In the third scenario the users were asked to complete the game five times using controls which had been rotated ninety degrees. Again the users were given no time to get used to the controls, in order to capture the learning curve. The aim of scenario three was to investigate the impact of mixing the controls and axes/dimensions.
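The three control mappings can be sketched as simple key-to-direction tables. The exact assignments below are our interpretation of "reversed" and "rotated ninety degrees", not taken verbatim from the experiment materials.

```python
# Sketch of the three input mappings. Directions are (dx, dy) grid vectors:
# x grows rightward, y grows downward (a common screen-coordinate assumption).

NORMAL = {"Up": (0, -1), "Down": (0, 1), "Left": (-1, 0), "Right": (1, 0)}

def reverse(mapping):
    """Scenario two: reverse each control along its own axis."""
    return {key: (-dx, -dy) for key, (dx, dy) in mapping.items()}

def rotate90(mapping):
    """Scenario three: rotate each control by 90 degrees, mixing the axes."""
    return {key: (-dy, dx) for key, (dx, dy) in mapping.items()}

print(reverse(NORMAL)["Up"])   # (0, 1): Up now moves down, same axis
print(rotate90(NORMAL)["Up"])  # (1, 0): Up now moves right, across axes
```

The two transforms make the structural difference between the scenarios explicit: reversal stays within an axis, while rotation maps each input onto the other axis.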

E. Data Capture
After each experiment the following data were captured:
The length of time taken to complete the course: This is a direct measurement of performance. As explained by Brunken et al. (2003), currently the most utilized objective method of examining cognitive load is performance based analysis.
The users' perception of task difficulty: The users were asked to provide a subjective measurement of task difficulty (mental effort) after every completed level. The measurement consisted of the users scoring the tasks on perceived difficulty on a scale of 1-7, ranging from exceptionally easy to exceptionally difficult. Ayres (2006) reveals that such an approach can produce highly reliable results where errors and performance are correlated to perceived complexity.
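The correlation between the two captured measures can be computed with a standard Pearson coefficient. The sample values below are invented for illustration; only the two measures (completion time and a 1-7 difficulty rating) come from the study design.

```python
# Sketch of the per-level data capture and the performance/difficulty
# correlation check in the spirit of Ayres (2006). Data are illustrative.

from math import sqrt

trials = [
    # (completion_time_s, difficulty_rating_1_to_7) -- invented values
    (70, 2), (95, 4), (140, 6), (160, 7), (80, 3),
]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

times = [t for t, _ in trials]
ratings = [r for _, r in trials]
print(round(pearson(times, ratings), 2))  # close to 1: ratings track times
```

A coefficient near 1 is what a "highly reliable" subjective measure would look like: slower completion times coincide with higher perceived difficulty.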

F. Procedure
All experiments took place with the participant in solitude so as to avoid any task environment influences; instruction was provided regarding the controls only, then the instructor monitored the experiment from a distance. The participants were asked not to converse with the researcher unless it was unavoidable.
The users were asked to carry out scenario one and the computer recorded the total time taken to complete each task. On completion of scenario one the average time was recorded to serve as a benchmark for scenarios two and three. The users were also asked to complete the questionnaire for scenario one; the users were not made aware of their times throughout the experiment, to avoid the times serving as a means for deducing difficulty. Comparing the users' results from the subsequent scenarios to scenario one removes any potential for individual skill levels to influence the data, i.e. the users were competing with themselves, therefore the skill factor was constant.
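The benchmarking step can be expressed as a simple normalization: each later time is reported relative to the participant's own scenario-one average, so individual skill cancels out. The numbers below are illustrative, not the study's data.

```python
# Sketch of the benchmark comparison: express a later completion time as
# a percentage slower than the participant's own scenario-one average.

def percent_slower(time_s, benchmark_s):
    """How much slower a time is than the participant's own benchmark."""
    return 100.0 * (time_s - benchmark_s) / benchmark_s

benchmark = 70.0        # this participant's scenario-one average (seconds)
later_attempt = 77.5    # a later attempt time (seconds) -- invented value

print(round(percent_slower(later_attempt, benchmark), 1))  # 10.7
```

Because every participant is measured against their own baseline, the same percentage means the same thing for a fast player and a slow one.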
The users then completed scenarios two and three. Again, the only information the users were provided with prior to being asked to complete the scenario was the inputs. The computer recorded the times taken to complete every level and the users were asked to complete the questionnaire after every level.

RESULTS
The results of the study showed that the brain can cope with input/output changes within the same dimension (for example swapping/reversing the actions on the X or Y axis) but struggles to cope with input changes across different dimensions (for example swapping the actions of the X and Y axes). That is, as a group, by the end of scenario two the participants had reached a similar average task completion time to the benchmark, just 7.5 seconds (or 10.7%) slower, with a converging perceived difficulty: 81% of the participants rated the benchmark as very easy or easy, compared to 75% at the end of scenario two. By the end of scenario three, in contrast, the average completion time was still 73 seconds (or 97%) slower, with 77% of the participants still rating the controls as hard or very hard. (For ease of use the times are displayed in decimal format in the visuals in the results section.)

A. Normal Distribution
Based on the null hypothesis that there is no relationship between the controls and performance times, the standard score (z-score) can be calculated for each task completion time. The z-score can then be cross referenced to the standard normal distribution table in order to calculate the probability that the modified-controls completion times were due to chance. The results were as follows:
Table 1. Results and probabilities, scenario one
Table 2. Results and probabilities, scenario two
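The significance test described above can be sketched directly: a z-score against the benchmark distribution, converted to a one-tailed p-value via the standard normal CDF (using the complementary error function rather than a printed table). The benchmark mean, standard deviation, and observed time below are illustrative values, not the study's data.

```python
# Sketch of the z-score test. p = P(Z >= z) under the null hypothesis
# that the modified controls do not change completion times.

import math

def z_score(x, mean, sd):
    """Standard score of an observation against a reference distribution."""
    return (x - mean) / sd

def p_value_from_z(z):
    """One-tailed p-value for a z-score via the standard normal CDF."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Illustrative benchmark distribution (seconds) and one observed time:
benchmark_mean, benchmark_sd = 70.0, 8.0
observed = 143.0  # a completion time under the modified controls

z = z_score(observed, benchmark_mean, benchmark_sd)
p = p_value_from_z(z)
print(p < 0.01)  # True: reject the null hypothesis at P = 0.01
```

With times this far from the benchmark, the probability of observing them by chance is vanishingly small, which is what licenses rejecting the null hypothesis below.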

Scenario One:
Based on a P value of 0.01, in both cases the null hypothesis that there is no relationship between the controls and performance times can be rejected. The average times in both input change scenarios (Fig 5) generated inverse relationships where the time was inversely proportional to the number of attempts. In both cases similar relationships can be observed in the standard deviations and variances (Fig 6 & 7). The subjective measurements of task difficulty (mental effort) also generated inverse relationships where the perceived difficulty was also inversely proportional to the number of attempts. As demonstrated by Ayres (2006), these results can be considered highly reliable as performance is correlated to perceived complexity.

Scenario Two:
The average time results generated by scenario two show an inverse relationship where the difference in the time intervals decreases throughout the experiment, i.e. the time improvement between attempt one and attempt two is greater than the time improvement between attempt two and attempt three, with the users reporting a correlating decrease in mental effort. In regard to cognitive load theory this is exactly what one would expect to observe: the brain is altering the original schema and transferring the operations from controlled to automated processing. However, the same trend is not observed in the standard deviation and variance, which stay relatively static between attempt one and attempt two, then decrease. After a deeper investigation of the results this initial plateau can be explained through the causal factor of the learner, in this case learner skill and the extra time some of the users took to adapt to the controls: 33% of the users failed to decrease their time by 10% or more between attempt one and attempt two, compared to only 6% of the users failing to drop their time by 10% or more between attempt two and attempt three. This suggests that some of the participants took longer to adapt to the new controls than others. A similar trend can be observed at the end of the experiment between attempt four and attempt five. Again this can be explained through learner skill, where the results revealed that 16% of the users did not manage to get within 20% of their benchmark time; in contrast, 52% of the participants managed to get a time within 10% of their benchmark time.
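The attempt-to-attempt analysis above reduces to one statistic: the share of participants who failed to improve by at least 10% between consecutive attempts. A sketch, with invented per-participant times rather than the study's data:

```python
# Sketch of the improvement analysis. Each inner list holds one
# participant's attempt times in seconds (illustrative values only).

def share_failing_to_improve(times_by_participant, attempt_a, attempt_b,
                             threshold=0.10):
    """Fraction of participants whose attempt_b time is not at least
    `threshold` (e.g. 10%) faster than their attempt_a time."""
    failing = 0
    for times in times_by_participant:
        improvement = (times[attempt_a] - times[attempt_b]) / times[attempt_a]
        if improvement < threshold:
            failing += 1
    return failing / len(times_by_participant)

data = [
    [150, 148, 120],  # slow to adapt between attempts one and two
    [140, 110, 95],
    [160, 130, 115],
]

print(share_failing_to_improve(data, 0, 1))  # one of three failed to drop 10%
```

Applied to the real data, this is the computation behind the 33% and 6% figures quoted above.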

Scenario Three:
The average times generated by scenario three show a substantial increase over scenario two, with the initial average time increasing by 206.67%, then following the same inverse relationship as scenario two where the difference in the time intervals decreases throughout the experiment. A plateau in the standard deviation and variance can also be observed in scenario three between attempt four and attempt five; again this can be explained by variation in participant skill levels. Many of the users struggled during scenario three, with only 42% of the users managing to record a time under 2.30 and the times during the fifth attempt varying from a best time of 1.32 to a worst time of 3.48. Furthermore, 20% of the users did not record their best time on the last attempt and 61% of the users recorded a jump in time at some point during succeeding attempts throughout the experiment. Jumps in time were also recorded in scenario one; however, invariably they occurred once the individuals were recording fast performance times and can therefore be explained through path selection as opposed to learnability and mental load.

B. Perceived Difficulty
The perceived difficulty (mental load) of the three scenarios was highly correlated with the average performance times. However, although the average times vs. the perceived difficulty show a straightforward relationship, with inverse relationships matching the performance times, a more in-depth analysis of the data reveals some interesting results. As previously mentioned, the participants were not shown their times during the experiment, to avoid providing them with an objective measurement by which they could gauge difficulty.
Of the 61% of users who recorded a jump in time during scenario three, 37.5% simultaneously recorded a decrease in difficulty. A similar trend can be observed throughout scenario one; however, it does not appear to be for the same reason. In scenario one the participant times rapidly reached the benchmark time and then started to level out. Nonetheless, the users still reported a decrease in perceived complexity. This can be explained through cognitive load theory, as individuals can invest additional mental effort in order to compensate for an increase in mental load. As an individual grows accustomed to a task the performance level will stay the same but the required mental effort will decrease. The critical aspect is performance time: if there is room for improvement then the mental load will stay relatively high, as seen in attempts one and two (Fig. 8 and 9), while the performance increases. Only when the room for improvement diminishes does the performance stay the same while the mental effort drops off. In this case the recorded drop in mental effort would seem to be a result of the movement from controlled processing to automated processing, which is supported by the fact that the reported mental effort had shifted to the easier-rated side of the scale.
The same explanation cannot adequately account for the results generated in scenario three. Firstly, the performance times were not even close to the benchmark time, nor were they leveling out. Secondly, the perceived difficulty was not dropping off; in fact it was still firmly within the very hard/hard range. A possible explanation for this is a perception of learning: the very act of practicing the inputs generates a perception that the individual must be improving, even if they are not. This would also explain the difference between perception and time when the two scenarios are directly compared. Attempt five in scenario three has an average completion time of 2.28, with 75% of the users rating it hard or normal, whereas attempt two in scenario two has a similar perception rating, with 77.5% of the users rating it normal or harder, yet has a vastly superior performance time of 1.42. However, this explanation is purely hypothetical and further research into this phenomenon is required.

DISCUSSION
The findings of this study demonstrate that even at a fundamental level the selection of inputs and resulting outputs, i.e. interaction mappings, can have a substantial influence on mental effort and learnability. Consequently, it seems fair to conclude that complex mappings have the potential to impact cognitive load. Furthermore, the results demonstrate the power that pre-existing schemas, of which conventions are a subset, can have on learning curves and product operation, re-raising the original question: should designers be designing for pre-existing schemas/conventions, or do designers need to re-evaluate the role of conventions?
Approaching the question from a user centered design perspective provides a clear and definitive answer: designers should design toward pre-existing schemas. This lets designers ensure that new users who are unfamiliar with their product are maximizing the use of automated processing, which in turn reduces the use of controlled processing. The net result is a lower mental workload and improved product learnability and usability, which in turn reduces product risk; for, as explained by Kemp and Van Gelderen (1996), users' perception of ease of use is a critical aspect of their first impression of product usability.
However, challenging the user centered design perspective raises many of the previously highlighted objections. For example, while there is little doubt that conventions do result in a lower mental load, at least initially while learning, there is nothing to suggest that this lower mental load will be sustained in the long term; and if instructional design does offer a parallel subject area, then it would seem that alternative interactions could offer a lower long-term cognitive load. Indeed, in certain cases conventions may be exploiting short term cognitive gain to the detriment of long term usability. Furthermore, the perceived cognitive gain may not actually translate into improved performance; as the results of the test revealed, the perception of difficulty did not necessarily coincide with performance.
There is, however, a fundamental difference between learning to operate a product and instructional design: instructional designers have a captive audience. Instructional designers have the luxury of time to implement new instructional procedures, whereas dissatisfied users can simply cease to interact with a product (Tuch et al., 2012). Taking the above into account, it is clear why conventions are retained, and what the potential issues with rejecting conventions are. However, when considering the evolution of conventions driven by the evolution of products, a strong case can be made for the need to adequately scrutinize conventions. Products have evolved and conventions with them, but in certain cases evolving products have changed the fundamental nature of certain interactions. Consider tablets: according to Emarketer (2015), more than half of the population of the UK now uses handheld tablets. Many of the conventions used by tablets are directly descended from desktop computers, for example web browsers. However, the tablet has fundamentally altered the manner in which users employ those conventions.
Desktop computers utilize bimanual interaction and have a precise input device in the form of a mouse. Tablets, on the other hand, restrict bimanual interaction, as one of the hands is immobilized by having to hold the device, a problem which is having a detrimental impact on tablet operation (Wagner et al., 2012; Trudeau et al., 2013). Consequently, due to the adoption and alteration of pre-existing conventions, tablet users are stuck with conventions that were not truly designed for the interactions they are carrying out. Obvious examples include the size of web browsing buttons and icons that were designed for mouse interaction (consider the relative size of a finger in comparison to a mouse pointer), and users having to type on virtual keyboards with one hand when those keyboards were designed for bimanual interaction.
Even taking into account the above issues, if instructional design can serve as an area from which theory can be borrowed, then there remains an even more critical objection: many of today's conventions predate modern design practices and were adopted without any contemplation of cognitive load. Given that situation, it would seem highly unlikely that all conventions are optimal in regard to cognitive load. However, that does not imply that conventions have no role to play in product design and that all conventions should be summarily dismissed, but instead that conventions should be adequately scrutinized and not simply accepted, or act as a justification for accepting, interactions. This is especially true as products, which are becoming more complex and novel, are borrowing conventions from other products.

CONCLUSIONS
This study carried out an experiment that demonstrates the impact that mappings, of the types found throughout product conventions, can have on cognitive load, even at the most basic level. Consequently, it is reasonable to conclude that complex mappings have at least the same potential to impact cognitive load. User centered design, which puts the end user at the core of design, has been widely accepted as vital to the success of products and systems. User centered design proposes that interaction mappings should conform to cultural conventions, as this presents users who are unfamiliar with a product with familiar interactions.
However, familiarity does not imply optimum usability. This is an issue that has been highlighted by instructional designers, who have rejected persisting with traditional learning and teaching practices in favour of developing new ones based on a deeper understanding of the cognition of the learner. Attention has been brought to the parallel between instructional and product designers, as they both share the common goal of developing solutions that are effective, efficient, and appealing. This paper argued that designers should adopt a similar approach in order to challenge, or at the very least verify, mapping conventions. Ultimately, as products evolve down innovative and technologically advanced routes, there is a case for rejecting conventions in favour of interactions aimed at the long term usability of a product.