3. Evaluation

This third phase analyses the promising scenarios that emerged from the modelling phase in more detail. The fundamental technological idea of a concept tends to remain unchanged across scenarios. The evaluation therefore revolves mainly around the advantages and disadvantages during development and implementation for the operating players, and of course the improvements that customers ultimately benefit from. To boil the evaluation down to a manageable result, the alternatives are rated with a scorecard. The scorecard considers quality aspects for the operating players, as well as factors that are relevant to the customer experience:

Performance indicators for operating players

* Increase in turnover: does the scenario yield a gain in sold products or services?
* Increase in customer base: does the scenario increase the number of users of a product or service, without necessarily increasing sales (e.g. more usage of a data plan a customer already has)?
* Low implementation costs: what are the relative costs in terms of infrastructure, human resources and implementation time?
* Efficient resource and process usage: does the scenario utilize existing processes and resources, rather than requiring extensive restructuring and the introduction of new entities?
* Image gain: can players expect a positive image transfer from the scenario, either through the Blended Shopping service itself or through co-branding between partners?
* Increase in customer engagement: does the service increase the chances of long-term customer loyalty towards the operating companies or brands?


Performance indicators for customer service

* Saved time: do customers save time due to special aspects of the constellation?
* Saved money: does the scenario offer customers an opportunity to save money?
* Increased process convenience: are there features or shortcuts that shorten or simplify the shopping process?
* High value of added information: is the additional information of higher value compared to other constellations?
* High technical quality of service: is the application well developed, usable and up to current technological standards? Is the level of expertise in support sufficient?


This particular set is not meant to cover every factor relevant to every possible Blended Shopping scenario. It attempts to touch on the most relevant areas, with equal attention to operation and customer service. The scorecard provides a framework for quickly assembling a list of performance indicators for a given situation, enabling a rapid evaluation of multiple scenario alternatives.

To incorporate their individual importance, the performance indicators are weighted, with all weightings adding up to 1. Each indicator is then rated on a scale from 1 to 5 (5 being the best), depending on how well a scenario fulfils that aspect. The weighted ratings (weighting * rating) add up to the final score. This approach is adapted from the method used in Strategic Factor Analysis Summary (IFAS, EFAS, SFAS) [HuWh2000].
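The weighted-sum calculation above can be sketched in a few lines of Python. The indicator names, weightings and ratings below are invented for illustration and do not come from the text:

```python
# Minimal sketch of the scorecard calculation: final score = sum of weighting * rating.

def scorecard_score(weights, ratings):
    """Compute the weighted score for one scenario."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("weightings must add up to 1")
    if any(not 1 <= r <= 5 for r in ratings.values()):
        raise ValueError("ratings must be on a scale from 1 to 5")
    return sum(weights[name] * ratings[name] for name in weights)

# Rating one hypothetical scenario against three indicators:
weights = {"increase in turnover": 0.5,
           "low implementation costs": 0.3,
           "saved time": 0.2}
ratings = {"increase in turnover": 4,
           "low implementation costs": 2,
           "saved time": 5}

print(round(scorecard_score(weights, ratings), 2))  # 0.5*4 + 0.3*2 + 0.2*5 = 3.6
```

Comparing the resulting scores across scenario alternatives then gives the ranking the evaluation phase is after, subject to the caveat about strategic comparability discussed below.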

When rating scenario options, the score does not necessarily yield a simple "better or worse" result. Some alternatives follow different strategic approaches that are not entirely comparable with each other. A common example is the question of branding in cooperative scenarios. A department store chain could identify an MNO (mobile network operator) as an effective partner for a concept, but the image transfer that occurs might be undesired or not in line with the existing marketing strategy.