We are also in the process of opening our first caching data center in Asia to improve performance for users in that region. While bandwidth is an important factor, poor performance can also come from high latency. Since we're limited by the speed of light and the physical distance between users and our servers, we have to get closer to them. The decision to open this new data center was derived directly from performance data collected with probes in those regions. With this real-world data, our Technical Operations team was able to identify the best physical location to achieve maximum impact. The new location is expected to open in late 2017/early 2018, and we've already set up additional performance measurements focused on Asia in order to assess the before/after impact of this big change.

As for past achievements, it's best to look at the trend of our core performance metrics over long periods of time. We sometimes get big wins from big, visible changes – such as transitioning to HHVM, which cut article save time in half – but hundreds of small performance improvements over a long period have had an even bigger impact. While HHVM brought the average time to save an article edit from 6 seconds down to 3 seconds in 2014, we have since reduced it to less than 900 milliseconds. This is the result of a constant focus on performance at the Foundation. That culture, applied to many small individual engineering decisions, leads to tremendous performance improvements over time.

The long-term impact we've had on front-end performance is less clear. Last year we fixed a number of issues that were previously skewing those metrics, and we're in the process of overhauling our real-user metrics. We know front-end performance hasn't worsened, but we can't claim it has improved. Maintaining current performance is a challenge in itself, as the wikis grow more and more feature-rich. We work with all teams releasing new software, as well as volunteers, to ensure that feature releases don't impact performance negatively. This is critical for users with bad internet connections, who would be disproportionately affected by performance regressions. So far, the dozens of performance regressions – often the result of unforeseeable side effects – that the Performance Team has caught since its inception have all been fixed quickly.
Measuring performance as it is experienced by users, and interpreting the data correctly, is a significant challenge in itself; you need to have done it consistently well over a long period of time before you can claim performance gains with certainty. This is even more true for users with bad internet access. The classic example: for some users, the connection is so bad that a given request ends before completion, never reaching the stage where it can send us data about its performance. In essence, the worst experience can't be measured. And when we improve performance to the point that those users start having a working experience, it might still be a slow one, making it look like performance has worsened on average – we start receiving metrics from these users, and they're slower than average – when in practice those users went from an experience so broken it didn't work at all to a slow but working one, which is obviously an improvement. Thankfully, web performance is a very active field, and browser vendors are constantly releasing new performance-related APIs, which we leverage whenever we can to understand performance better.
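
As an illustration of those browser APIs, here is a minimal real-user measurement sketch built on the standard PerformanceObserver and Navigation Timing interfaces. The /beacon/perf endpoint is a hypothetical placeholder, not Wikimedia's actual collection pipeline.

```typescript
// Minimal real-user measurement sketch using the standard
// PerformanceObserver and Navigation Timing Level 2 APIs.
// The "/beacon/perf" endpoint is a hypothetical placeholder.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const nav = entry as PerformanceNavigationTiming;
    const metrics = {
      // Time to first byte: network latency plus server time.
      ttfb: nav.responseStart - nav.startTime,
      // Full page load, as experienced by the user.
      loadTime: nav.loadEventEnd - nav.startTime,
      transferSize: nav.transferSize,
    };
    // sendBeacon survives page unloads, unlike a plain fetch().
    navigator.sendBeacon('/beacon/perf', JSON.stringify(metrics));
  }
});
observer.observe({ type: 'navigation', buffered: true });
```

Note that a script like this never runs at all for page loads that abort early, which is exactly the survivorship bias described above.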

———

If organizations wanted to make their sites accessible for low-bandwidth users, where do you recommend they begin?
Anne Gomez: A lot of people who are cost-conscious with their data use proxy browsers such as UC Browser and Opera Mini. These browsers strip out most of the data-heavy content and features, including JavaScript, which is essential for most modern sites to operate. Without getting too deep into the technical details of how they do this, it's important for brands with a global presence to make sure that their sites work well in these browsers. Even if the functionality is limited relative to the full site, users of these browsers shouldn't have a broken experience.
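
One common way to stay functional when a proxy browser strips JavaScript is progressive enhancement: ship working server-rendered HTML, and let script upgrade it only when it actually runs. A minimal sketch, assuming a hypothetical /search endpoint and element IDs:

```typescript
// Progressive enhancement sketch. The underlying
// <form id="search-form" action="/search"> is plain HTML that submits
// and works even with JavaScript stripped; this script only upgrades
// it to an in-page experience when it actually runs.
// The /search endpoint and element IDs are hypothetical.
const form = document.querySelector<HTMLFormElement>('#search-form');
const results = document.querySelector<HTMLDivElement>('#results');

if (form && results) {
  form.addEventListener('submit', async (event) => {
    event.preventDefault(); // Only reached if the script survived the proxy.
    const query = String(new FormData(form).get('q') ?? '');
    const response = await fetch(`/search?q=${encodeURIComponent(query)}`);
    results.innerHTML = await response.text(); // Server returns HTML either way.
  });
}
```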

Jorge Vargas: Having a no-pictures version of Wikipedia was something we had with Wikipedia Zero in its initial stages. I think it could be an accessible way to reach Wikipedia for low-bandwidth users – perhaps involving an opt-in or opt-out option. That said, I'm not sure there would be a huge difference, as articles are usually heavier on the text side. There are definitely pros and cons to this.

Olga Vasileva: As Anne pointed out, we implemented lazy loading of images on the mobile website. This means that images load as the user scrolls down the page. If a user only views the initial sections of the page, they do not download the data for images below the fold. For many websites where users are not likely to read the entire page, lazily loading images or other content is an efficient way of saving data for their users.
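
A minimal sketch of that technique using the standard IntersectionObserver API (the general pattern, not the mobile site's actual implementation): images are written as <img data-src="..."> so nothing downloads up front, and the real src is attached only when the image approaches the viewport.

```typescript
// Lazy-loading sketch: attach the real src only when an image is
// about to scroll into view. A sketch of the general technique, not
// the mobile site's actual implementation.
const io = new IntersectionObserver(
  (entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      const img = entry.target as HTMLImageElement;
      img.src = img.dataset.src!; // Triggers the actual download.
      obs.unobserve(img); // Each image only needs to load once.
    }
  },
  // Start fetching a little before the image enters the viewport.
  { rootMargin: '200px' }
);

document.querySelectorAll<HTMLImageElement>('img[data-src]')
  .forEach((img) => io.observe(img));
```

Modern browsers also support a native loading="lazy" attribute on images, which achieves much of the same effect without any script.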

Gilles Dubuc: We have to accurately measure the performance experienced by those low-bandwidth users first. We've seen examples in the industry where things started with good intent (e.g. making a text-only version of the website) but the execution was poor, with the "light" website loading megabytes of unnecessary JavaScript libraries because that's what its developers were used to working with.

Developing a website focused on low-bandwidth users requires a drastically different approach than developing a website focused on being feature-rich. Not that those two objectives are incompatible, but performance and lightness are difficult to achieve after the fact by retrofitting an existing website. They have to be a core concern from day one, and they require discipline that goes beyond just getting things to work. This is why such projects are usually separate websites: it's easier when starting from scratch. The ideal, of course, is a single website that does what's best for low-bandwidth users by adapting the experience for them. And much like accessibility, improving the experience for low-bandwidth conditions usually makes the experience more pleasant for users with high-bandwidth internet as well.

———

I realize we're talking about websites, but there are also ways to think about USSD and SMS. How have you thought about those platforms when thinking about conveying information to the end user?
Jack Rabah: We are currently exploring a partnership with a global mobile services company to offer free Wikipedia via SMS and voice. This collaboration would deliver Wikipedia content to mobile network operator (MNO) subscribers, free of charge, through the interactive SMS and voice capabilities of their platform. We are exploring this as a pilot in order to learn how well it works in practice. From the lessons of this pilot, we hope to eventually make the service widely available and reach the billions of people who have mobile phones but cannot afford access to the internet.

Jorge Vargas: USSD is an interesting way to bring information to the end user. It works over very low bandwidth, and there is no need for a smartphone. The problems are the strict limits on how much text can be shown (just two or three lines are displayed), a session timeout that forces you to reconnect after a certain time, and a UX that is not very friendly. Facebook and Twitter also have USSD platforms – it's a very small audience, but a very specific one that could be served.

———

What about preloading content on mobile? What kinds of things can be done technically?
Jorge Vargas: We can preload the Wikipedia app on smartphones and tablets. With the app, we can also preload a file containing an offline version of Wikipedia (ZIM files, built by Kiwix). Ideally, we would be able to preload curated "packages" or "collections", but this kind of content curation is yet to be explored. We could have packages with information on responding to natural disasters, for example. The only topic-specific ZIM files so far are the ones for Wikimed.

Anne Gomez: To build on what Jorge said, we're learning from our initial research around that feature that people are looking for small, specific content packages on their devices, which is something we aren't currently able to offer. You can see that research linked here under "Research findings." We'd love to be able to create and offer smaller, more focused packages of files based on a topic or area of interest, in any language, and we are investigating what that might look like and how we could support our readers and editing community in building exactly what they need.

———

I know you're also investigating the possibility of changes to the mobile website to support intermittent connections. Can you talk a little bit about how to support users with intermittent connections?
Anne Gomez: Users with intermittent connections exist all over the world. Even in the most connected cities, there are still gaps in coverage. It's really frustrating when you're browsing the web waiting for part of a site to load, the connection drops, and the entire page disappears. Beyond that, we know from our research that some people who are cost-conscious about their data usage open browser tabs while they're on wifi to read later, when they don't want to pay for internet. We want to support those people.

Olga Vasileva: We have recently begun a project that will allow users to access the mobile website even with an intermittent connection. For example, if their connection is spotty or they leave a wifi zone, they will still be able to read articles within the website – they can hit the back button and access the articles they read before or, tentatively, save articles that they would like to read offline. We will also be improving the messaging for users in these circumstances – they will know which portions of the content they can access and which are unavailable while offline. The project will also aim to make the website more cost-friendly by using less data when loading a page.
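
Offline support like this is commonly built on service workers, which can serve previously read pages from a local cache when the connection drops. A minimal cache-as-you-read sketch of the general technique, with a hypothetical cache name, rather than the team's actual design:

```typescript
// Service worker sketch: cache pages as the user reads them, and fall
// back to the cache when the network is unavailable.
// A sketch of the general technique, not the team's actual design.
declare const self: ServiceWorkerGlobalScope;

const CACHE = 'read-articles-v1'; // Hypothetical cache name.

self.addEventListener('fetch', (event: FetchEvent) => {
  if (event.request.method !== 'GET') return; // Only cache reads.
  event.respondWith(
    fetch(event.request)
      .then((response) => {
        // Online: keep a copy of each page the user reads so that it
        // stays available when the connection drops later.
        const copy = response.clone();
        caches.open(CACHE).then((cache) => cache.put(event.request, copy));
        return response;
      })
      // Offline or mid-drop: serve the cached copy of a previously
      // read page, if one exists.
      .catch(() => caches.match(event.request)
        .then((cached) => cached ?? Response.error()))
  );
});
```

A real implementation would also need cache size limits and invalidation so stale articles don't accumulate on users' devices.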

———

Where can web devs go to learn more about this or stay abreast of what your team is up to?
Jorge Vargas: They can always reach us at globalreach[at]wikimedia[dot]org to learn more about our work and what our team is up to.