
What Is a Browser and How Does It Work?

By Kitakabee, Community Contributor - June 8, 2023

Web browsers play an integral role in the way we interact with the internet. They’re the gateway to the vast online universe, allowing us to shop, learn, communicate, entertain ourselves, and much more.

This article will help you understand the basics of web browsers, unraveling the technology that connects us to the countless corners of the internet. 

If you’ve ever been curious about how your request transforms into a full-fledged website on your screen or how a browser protects your data while ensuring an optimized experience, read on. 

In this article:

  • What is a Browser?
  • Evolution of Browsers
  • Functions of a Browser
  • Components of a Browser
  • How does a Browser Work?
  • Types of Browsers

What is a Browser?

A web browser is software that enables users to access and view content on the World Wide Web. Its primary function is to locate and retrieve web pages, images, videos, documents, and other files from servers and display them on the user’s device.

For instance, imagine you want to visit a website. Here’s where the browser comes in. When you type the website’s URL into the browser’s address bar and hit Enter, your browser sends a request to the server where the website’s files are stored. This communication happens over protocols such as HTTPS (Hypertext Transfer Protocol Secure) or HTTP (Hypertext Transfer Protocol).

On receiving your request, the server sends back the website’s files. These files are often written in languages like HTML, CSS, and JavaScript. Your browser’s job is to interpret this code and render it into the web page you see.

In essence, the browser acts as a bridge between you and the website, sending your requests to the server and translating the server’s response into a format you can easily interact with on your device. Without a browser, navigating the vast ocean of internet content would be nearly impossible.
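To make this request/response exchange concrete, here is a minimal sketch using the fetch API that modern browsers expose to page scripts (and that Node.js 18+ also provides). The URL is just a placeholder; run it in Node.js 18+ as an ES module, or adapt the URL to a same-origin address in a browser console.

```javascript
// Minimal sketch of the exchange described above: send an HTTP(S) request
// for a page and inspect the server's response.
const response = await fetch("https://example.com/");

console.log(response.status);                      // e.g. 200 when the page was found
console.log(response.headers.get("content-type")); // e.g. "text/html; charset=UTF-8"

const html = await response.text(); // the HTML the browser would now render
console.log(html.slice(0, 80));     // first few characters of the page source
```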

Evolution of Browsers 

The first web browser – or browser editor – was called WorldWideWeb. It was invented by Sir Tim Berners-Lee in 1990.

During the initial era of the browsers, names like Mosaic, Netscape Navigator, and Internet Explorer dominated the industry. 

But the industry has evolved over the past three decades, driven by improvements in performance, security, and user customization.

Here is the list of modern browsers that rule the industry:

  • Google Chrome
  • Mozilla Firefox
  • Apple Safari
  • Microsoft Edge
  • Tor Browser
  • Samsung Internet

These modern browsers offer multiple features that help them deliver the best browsing experience to users.


Functions of a Browser

A web browser serves a multitude of functions to enhance your browsing experience. Let’s explore some of the key features and functions of a browser:

  • Web Page Rendering: When you visit a website, the browser retrieves the website’s HTML, CSS, and JavaScript files from the server. It then interprets and processes this code to construct the web page you see on your device. The HTML defines the structure and content of the page, CSS styles the page’s appearance, and JavaScript adds interactivity and dynamic elements.
  • Navigation: Browsers provide an intuitive interface for navigating the internet. You can enter a website’s address (URL) directly into the address bar, and the browser will take you to that specific webpage. Additionally, you can click on hyperlinks within web pages to navigate to other related pages. Browsers also support bookmarks, which allow you to save and organize frequently visited websites for quick access.
  • Tabbed Browsing: Tabs revolutionized web browsing by allowing you to open multiple web pages within a single browser window. Instead of opening separate browser instances for each webpage, you can open new tabs, each representing a different webpage. This feature facilitates multitasking and makes it easy to switch between different websites without cluttering your screen .
  • Bookmarks and History: Browsers enable you to save your favorite websites as bookmarks. Bookmarks act as shortcuts, allowing you to quickly revisit those websites without having to remember their URLs. Browsers also maintain a history of the websites you’ve visited, providing a chronological record that you can browse through to revisit previously viewed web pages.
  • Search Functionality: Browsers often include a search bar, typically located in the toolbar. This search bar is integrated with popular search engines like Google, Bing, or DuckDuckGo. Instead of navigating to a search engine’s website, you can directly enter keywords or phrases into the search bar. The browser sends your search query to the chosen search engine, which then displays relevant search results.
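The search-bar behavior in the last point can be sketched in a few lines. This is a hypothetical illustration, not any browser’s actual code; the Google search URL is used only as an example engine.

```javascript
// Hypothetical sketch: turning address-bar input into either a direct
// navigation or a search-engine query, as described above.
function resolveAddressBarInput(input, searchEngine = "https://www.google.com/search") {
  try {
    // Valid URL (e.g. "https://example.com")? Navigate to it directly.
    return new URL(input).href;
  } catch {
    // Otherwise treat the input as keywords and build a query URL.
    const params = new URLSearchParams({ q: input });
    return `${searchEngine}?${params}`;
  }
}

console.log(resolveAddressBarInput("https://example.com"));
// -> "https://example.com/"
console.log(resolveAddressBarInput("what is a browser"));
// -> "https://www.google.com/search?q=what+is+a+browser"
```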


A browser has two elements: a front end and a back end. The complex back end facilitates the core functioning of the browser, while the front end interacts with the user. Let’s dive a bit deeper:

  • Front-end: The front-end of a browser refers to the user-facing part that you interact with. It includes the graphical user interface (GUI) elements, such as the address bar, navigation buttons, bookmarks, and tabs. The front-end also handles the rendering of web pages, displaying the content, images, and interactive elements on your device’s screen. 
  • Back-end: The back-end of a browser encompasses the complex processes that occur behind the scenes. It handles the communication between the browser and web servers, fetching and managing web page resources, and processing the code that makes up the web pages. The back-end interprets HTML, CSS, and JavaScript files, ensuring that web pages are rendered correctly. It manages network connections, supports various protocols like HTTP and HTTPS, and handles security measures such as encryption and certificate verification.

The front-end and back-end components of a browser work together seamlessly to provide a rich and interactive browsing experience. 

When you interact with the front-end by typing a URL, clicking on links, or using browser features, the front-end communicates with the back-end to fetch the necessary web page resources. 

The back-end processes these resources and sends the rendered content back to the front-end for display on your device. This collaboration between the front-end and back-end enables you to navigate the internet, access websites, and interact with online content smoothly.

Apart from these two major elements, here are the components of a browser.

Components of a Browser

  • User Interface: The user interface is the space where users interact with the browser. It encompasses elements such as bookmarks, an address bar for entering website URLs, back and forward buttons for navigation, tabs for multitasking, and menus for accessing various browser features and settings. The user interface provides a visually intuitive way for users to control and navigate the browser.
  • Browser Engine: The browser engine acts as the core of the browser, handling user interactions, rendering web pages, and facilitating communication with other components. It coordinates the flow of information between the user interface, rendering engine, and other browser components. The browser engine ensures that user actions, such as clicking a link or entering a URL, are properly processed and trigger the appropriate actions within the browser.
  • Rendering Engine: The rendering engine is responsible for displaying the content of web pages within the browser. It takes the HTML, CSS, and JavaScript code of a web page and converts it into a visual display that users can see. The rendering engine interprets the HTML structure, applies the CSS styles to determine the page’s layout and appearance, and executes any JavaScript code to add interactivity and dynamic elements to the web page.
  • JavaScript Interpreter: The JavaScript interpreter is a component within the browser that executes JavaScript code found on web pages. JavaScript is a programming language commonly used for adding interactivity and dynamic functionality to websites. The interpreter ensures that JavaScript code is properly executed, allowing web pages to respond to user actions, update content dynamically, and interact with APIs and other web technologies.
  • Networking: The networking component of a browser handles various aspects of network communication. It is responsible for resolving website URLs into IP addresses, sending HTTP requests to web servers, establishing network connections, and receiving and processing the responses. The networking component plays a crucial role in fetching web page resources, such as HTML, CSS, images, and other files, from servers and delivering them to the rendering engine for display.

Each of these components is important, and they work together to deliver a seamless browsing experience.
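As a rough illustration of how the networking component and the rendering engine hand off to each other, the sketch below uses two standard web APIs, fetch and DOMParser, from an ordinary page script. It only approximates the first parsing step of a real rendering engine.

```javascript
// Illustrative sketch: fetch a page (networking component) and parse its
// HTML into a DOM tree (the rendering engine's first step).
async function fetchAndParse(url) {
  const response = await fetch(url);  // resolve the URL and send the HTTP request
  const html = await response.text(); // read the response body as text

  // Parse the HTML text into a document tree. A real rendering engine would
  // now compute styles, lay out boxes, and paint pixels.
  const doc = new DOMParser().parseFromString(html, "text/html");
  console.log(doc.title, doc.body.children.length);
  return doc;
}

// Using the current page's own URL keeps the request same-origin.
fetchAndParse(location.href).catch(console.error);
```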


How does a Browser Work? 

Browsers are responsible for retrieving and displaying web content to users. When a user enters a URL or clicks on a link, the browser initiates a complex series of actions to retrieve the web content from a server and display it on the user’s device.


The process begins with Domain Name System (DNS) resolution, where the browser translates the domain name into an IP address to locate the server where the web page is stored. 

  • The browser then sends an HTTP request to the server, specifying the path and parameters of the requested resource.
  • Once the server receives the request, it sends an HTTP response to the browser containing the requested resource in HTML, CSS, and JavaScript code. 
  • The browser’s rendering engine interprets and renders the code to display the web page on the user’s device. 
  • The CSS stylesheets are applied to format the web page’s content, including fonts, colors, and layout.
  • The browser may also execute JavaScript code on the web page to add interactivity and dynamic behavior. 

As new content is loaded or changes are made to the web page, the browser updates the display accordingly.
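The first two steps, DNS resolution and the HTTP request, can be reproduced outside a browser. Here is a minimal sketch using Node.js built-in modules (assumes Node 18+, run as an ES module; example.com is a placeholder host):

```javascript
// Sketch of steps 1-2 above using Node.js built-ins.
import dns from "node:dns/promises";
import https from "node:https";

const host = "example.com";

// Step 1: DNS resolution - translate the domain name into an IP address.
const { address } = await dns.lookup(host);
console.log(`${host} resolves to ${address}`);

// Step 2: send an HTTPS request for the root path and read the response.
https.get({ host, path: "/" }, (res) => {
  let body = "";
  res.on("data", (chunk) => (body += chunk));
  res.on("end", () => {
    // Steps 3-5 happen inside the browser's rendering engine: parsing this
    // HTML, applying CSS, and executing any JavaScript.
    console.log(`Status ${res.statusCode}, received ${body.length} bytes`);
  });
});
```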

Apart from the working principles, there are a few terms related to browsers that you must know.



Commonly Used Browser Jargon

A few commonly used terms around browsers are:

  • URL: A Uniform Resource Locator (URL) is the address of a unique resource on the web.
  • HTML: HyperText Markup Language (HTML) is used for creating web applications and pages.
  • HTTP: Hypertext Transfer Protocol (HTTP) allows the fetching of resources, such as HTML documents.
  • HTTPS: Hypertext Transfer Protocol Secure (HTTPS) works like HTTP but adds encryption for secure communication with the server.
  • IP Address: An IP address identifies a specific server connected to the internet.
  • DNS: The Domain Name System (DNS) is a distributed database that maps domain names to IP addresses.
  • Cookies: Cookies are small text files that a website stores on the user’s device. When a user visits a website, the site may create a cookie to track the user’s activity or to remember preferences and login information.

How cookies are managed can vary from browser to browser.
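As a quick illustration, page scripts can read and write cookies through document.cookie; the "theme" cookie below is an arbitrary example name.

```javascript
// Minimal sketch: set a cookie, then read it back.
document.cookie = "theme=dark; max-age=86400; path=/"; // expires in one day

// document.cookie returns every cookie for the page as one "k=v; k=v" string.
const theme = document.cookie
  .split("; ")
  .find((pair) => pair.startsWith("theme="))
  ?.split("=")[1];

console.log(theme); // "dark"
```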


Types of Browsers 

There are several types of browsers available for users, including:

  • Desktop browsers: These are the most common browsers that users install on their desktop computers or laptops. Examples include Google Chrome, Mozilla Firefox, Microsoft Edge, Apple Safari, and Opera.
  • Mobile browsers: Browsers designed specifically for mobile devices such as smartphones and tablets are called mobile browsers. Examples include Google Chrome for Android and iOS, Safari for iOS, Firefox for Android, and Opera for Android and iOS.
  • Console browsers: These are designed for game consoles such as Xbox and PlayStation, allowing users to browse the web from their consoles.
  • Text-based browsers: Legacy browsers that only display websites as text, without graphics or images, are text-based. Examples include Lynx and ELinks.

Different browsers have their own interpretations of open web standards.

Since they each render CSS, HTML, and JavaScript uniquely, thoroughly debugging your website’s code is not enough to ensure that your website will look and behave as intended on multiple browsers.

This is where browser compatibility becomes crucial. Browser compatibility refers to the ability of a website or web application to function consistently across different browsers and their various versions. It ensures that users receive a consistent user experience regardless of the browser they use to access the website.

People use different browsers based on their personal preferences, device compatibility, and platform availability. Some popular browsers include Google Chrome, Mozilla Firefox, Microsoft Edge, Safari, and Opera. Each of these browsers has its own rendering engine, which interprets and displays web content in its unique way. This diversity makes it essential for website developers and companies to test their websites and applications on different browsers.

Companies invest in browser compatibility testing to ensure that their websites and applications are accessible, visually appealing, and function properly across multiple browsers. By testing their web assets on different browsers, companies can identify and address any inconsistencies or compatibility issues that may arise. This ensures that their target audience, regardless of their preferred browser, can have a consistent and satisfactory experience.

Additionally, companies need to consider the diverse range of devices and operating systems used by their audience. Websites and applications must be tested across various platforms, including desktop computers, laptops, tablets, and smartphones, to ensure optimal performance and user experience. 

That’s where you can rely on cross-browser testing, which helps pinpoint browser-specific compatibility errors so you can debug them quickly. 

It helps ensure you’re not alienating a significant part of your target audience simply because your website does not work on their browser or OS.



Browsers have been part of the web ecosystem since the start. The future of browsers will likely revolve around several key areas:

  • Privacy and Security: As privacy concerns grow, browsers are expected to prioritize user privacy by implementing enhanced security measures, stricter data protection, and better control over personal information.
  • Performance and Speed: Browsers will continue to strive for improved performance and faster loading times, enabling users to access web content quickly and efficiently.
  • Integration with Devices and Platforms : With the increasing prevalence of interconnected devices and platforms, browsers will focus on seamless integration and compatibility across different devices, operating systems, and platforms.

To ensure a smooth user experience across multiple web applications, it is crucial to have access to a robust testing platform. 

BrowserStack offers a comprehensive solution by providing a real device cloud with instant, on-demand access to over 3000 browsers and devices. This allows developers and testers to perform extensive testing across a wide range of configurations and ensure compatibility across various browsers and devices.



What is a web browser?

A web browser takes you anywhere on the internet, letting you see text, images and video from anywhere in the world.


The web is a vast and powerful tool. Over the course of a few decades, the internet has changed the way we work, the way we play and the way we interact with one another. Depending on how it’s used, it bridges nations, drives commerce, nurtures relationships, drives the innovation engine of the future and is responsible for more memes than we know what to do with.

It’s important that everyone has access to the web, but it’s also vital that we all understand the tools we use to access it. We use web browsers like Mozilla Firefox, Google Chrome, Microsoft Edge and Apple Safari every day, but do we understand what they are and how they work? In a short period of time we’ve gone from being amazed by the ability to send an email to someone around the world, to a change in how we think of information. It’s not a question of how much you know anymore, but simply a question of what browser or app can get you to that information fastest.


How does a web browser work?

A web browser takes you anywhere on the internet. It retrieves information from other parts of the web and displays it on your desktop or mobile device. The information is transferred using the Hypertext Transfer Protocol, which defines how text, images and video are transmitted on the web. This information needs to be shared and displayed in a consistent format so that people using any browser, anywhere in the world can see the information.

Sadly, not all browser makers choose to interpret the format in the same way. For users, this means that a website can look and function differently. Creating consistency between browsers, so that any user can enjoy the internet regardless of the browser they choose, is called web standards.

When the web browser fetches data from an internet-connected server, it uses a piece of software called a rendering engine to translate that data into text and images. This data is written in Hypertext Markup Language (HTML), and web browsers read this code to create what we see, hear and experience on the internet.

Hyperlinks allow users to follow a path to other pages or sites on the web. Every webpage, image and video has its own unique Uniform Resource Locator (URL), also known as a web address. When a browser visits a server for data, the web address tells the browser where to look for each item described in the HTML, which in turn tells the browser where it goes on the web page.

Cookies (not the yummy kind)

Websites save information about you in files called cookies. They are saved on your computer for the next time you visit that site. Upon your return, the website code will read that file to see that it’s you. For example, when you go to a website, the page remembers your username and password; that’s made possible by a cookie.

There are also cookies that remember more detailed information about you. Perhaps your interests, your web browsing patterns, etc. This means that a site can provide you more targeted content – often in the form of ads. There are types of cookies, called third-party cookies, that come from sites you’re not even visiting at the time and can track you from site to site to gather information about you, which is sometimes sold to other companies. Sometimes you can block these kinds of cookies, though not all browsers allow you to.


Understanding privacy

Nearly all major browsers have a private browsing setting. These exist to hide the browsing history from other users on the same computer. Many people think that private browsing or incognito mode will hide both their identity and browsing history from internet service providers, governments and advertisers. They don’t. These settings just clear the history on your system, which is helpful if you’re dealing with sensitive personal information on a shared or public computer. Firefox goes beyond that.

Firefox helps you be more private online by letting you block trackers from following you around the web.

Making your web browser work for you

Most major web browsers let users modify their experience through extensions or add-ons. Extensions are bits of software that you can add to your browser to customize it or add functionality. Extensions can do all kinds of fun and practical things like enabling new features, foreign language dictionaries, or visual appearances and themes.

All browser makers develop their products to display images and video as quickly and smoothly as possible, making it easy for you to make the most of the web. They all work hard to make sure users have a browser that is fast, powerful and easy to use. Where they differ is why. It’s important to choose the right browser for you. Mozilla builds Firefox to ensure that users have control over their online lives and to ensure that the internet is a global, public resource, accessible to all.


What is a Web Browser? Types and Examples You Need to Know

By Tibor Moes / Updated: July 2023


What is a Web Browser?

It’s impossible to use the internet in the modern world without a web browser. This powerful tool takes you across the web and shows you images, text, videos, and any other content type.

Maybe you don’t know it yet, but you’re using a web browser right now to read this article! There are plenty of browsers out there and they all work using (more or less) the same technology that we will explain below.

  • A web browser is a software application, acting as a user interface, that allows users to access, navigate, and interact with internet content through HTTP, often in the form of web pages.
  • Core components of a web browser include the rendering engine to interpret and display HTML documents, JavaScript engine for dynamic content, and network components for data communication, providing an integral user experience.
  • Browsers also provide critical features like bookmarking, privacy modes, extensions for added functionalities, and security measures like phishing and malware detection to ensure safe and efficient web browsing.


A web browser can be defined as a computer program that the user relies on to access information or sites on the World Wide Web or similar networks.

So how does a web browser work?

Not all web browsers work the same way. Some of them may interpret different formats in different ways. This is bad news for the user, given that a website may look different to them depending on the browser they use. That’s why it’s important to create consistency between different browser applications. For this reason, there are certain web standards that are used.

Web browsers work by talking to a server and asking for the particular pages users want to visit. The browser program will retrieve or fetch the code, often written in HyperText Markup Language (HTML) or similar languages.

Once it does that, the browser will interpret the code behind the script or language and show it as a web page the user wants to see.

Most of the time, this action requires user interaction in order for the browser to know which website or page to show. This means that you as a user have to use the address bar of the browser and enter the URL of the website you want to visit.

But what exactly is a URL and why is it important? Learn more below.

The Story of URLs

A website’s address is provided in the form of a URL. The acronym “URL” stands for “Uniform Resource Locator,” and it’s the piece of information that lets the browser know which site you want to visit.

For example, when you enter the following URL into the address bar of a browser: http://www.google.com, the browser will take you to the Google search engine.

The browser studies the URL in two parts.

First, it studies the “http://” section, which refers to HyperText Transfer Protocol. This is the protocol used for requesting and transmitting files on the internet, and it can be found on most web pages. It defines how images, text, and other content are transmitted on the internet.

It’s important for this type of information to be transmitted consistently so anyone using a browser can access the information. A browser will know how to interpret the data located on the right of the forward slashes because it knows the HTTP protocol.

Next, the browser will examine the domain name, which is www.google.com in this example. The domain name lets the browser know the location of the server from which it will have to retrieve the page.

If you had used a web browser ten or more years ago, you would have had to type in the full address, including the protocol: http://www.google.com. But web browsers are smarter today and no longer require you to specify the protocol. You can now simply type google.com and be taken to the desired page.

You can often find additional path segments or parameters at the end of a URL. For example, if you were to visit the Nike website, you might see an address such as http://www.nike.com/women that points to a particular part of the site. Here, the “women” segment lets the browser know you’re asking to see the women’s section of Nike’s website.
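The standard URL API makes this dissection easy to see; a quick sketch, using the article’s own Nike example:

```javascript
// Sketch: splitting a URL into the parts described above.
const url = new URL("http://www.nike.com/women");

console.log(url.protocol); // "http:"        - how to talk to the server
console.log(url.hostname); // "www.nike.com" - which server to ask
console.log(url.pathname); // "/women"       - which page or section on that server
```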

Browsers let users   open multiple links   or URLs at the same time. This is possible thanks to tabs. A tab creates a dedicated space for a website inside the same browser window. This prevents the program from cluttering your screen with different windows. It’s meant to emulate an old-fashioned cabinet of file folders.

Bookmarks and History

Web browsers have a great piece of functionality that lets users revisit websites at a later date. Users can do so with the help of bookmarks, which allow saving pages inside the browser.

There’s also an easy way to access a list of all your previously visited pages, which can be found in the “History” section.

Introducing Cookies

If you ever visited a website on the web, you must have seen a cookie notice at the bottom of the page. Unfortunately, this isn’t a notice about the yummy cookies hidden on your kitchen shelf.

In the world of web browsers, cookies relate to information that websites save about their users. These files are saved locally on your computer so that when you visit the site again, your browser can open the page faster. Also, the website will recognize that it’s you visiting and may remember your login credentials.

There are also more advanced cookies that are made to remember detailed information about users, such as browsing patterns, personal interests, and more. This is done to provide a more customized experience, and it’s mostly used by businesses that promote their services.

You can also encounter third-party cookies that come from websites you aren’t even visiting. They can track your activity on another site and sell the information they gather to companies. You can block this kind of behavior, but not all browsers will let you do that.

Now that we have covered the basics of what a browser is and its main features, let’s introduce some of the most popular browsers available today.

Web Browser Examples

There are dozens of web browsers to choose from today. Each example has its own nuance that makes some users prefer it over another.

The best programs out there are completely free. The options regarding interface, security, shortcuts, and other elements are different, so you can choose the one that works best for you.

Here’s an overview of the most popular browsers.

Google Chrome

Google Chrome is arguably the most popular browser today. It’s developed by Google, and it has the biggest web browser market share, with a whopping 65.87% as of June 2023. If you were to look for the best browsers on the internet, you’d find Chrome to be the winner in the Best Overall category.

This browser works with all operating systems, it’s fast and expandable, and it allows cross-syncing between devices. This cross-platform browser is easy to use and has a dedicated feature for using less data. You can also browse without saving your browsing history by using incognito mode.

Some other notable features include an offline download manager, security alerts, and personalized recommendations.

Safari

Safari, or Apple Safari, has the second-biggest market share among web browsers with 18.61%, and it’s the default browser on Apple devices.

If you’re an Apple user, you’ll find the Safari browser to be powerful, efficient, and secure.

Safari was the first browser to introduce a reading mode to its users. This option clears unnecessary elements from the page so you can focus on reading or watching a video without distractions.

The browser was also the first to introduce fingerprinting protection. This feature prevents web trackers from identifying you by your system specifications, a very common issue in most other browsers.

New versions of Safari also allow for added customization options and provide a very modern browsing experience. You can use the company’s Handoff feature and continue your browsing sessions between devices.

This browser only operates on Apple devices, which is its main downside for non-Apple users.

Opera

The Opera browser is great for collecting content. It works on all operating systems, and it’s completely free. Some of the best reasons to use Opera include its built-in proxy, excellent security, and great interface.

There’s a built-in ad blocker, as well as a VPN, so you can use the browser for a safer internet experience. This is especially important if you enter sensitive information on the web, such as your phone number, address, crypto wallet information, and other personal or financial data.

Gamers will love the special browser version designed solely for them: Opera GX. The browser includes Twitch integration, Razer Chroma support, and other features most gamers appreciate.

Chrome and Opera use the same Chromium-based technology, so you can use the Chrome store to add different integrations and add-ons to Opera.

Mozilla Firefox

The Mozilla Firefox browser is one of the best applications for private browsing, as well as for power users. It’s one of the most flexible browsers out there and comes with cross-platform syncing. This means you can use the browser on your computer, mobile device, and tablet, and save your login information, passwords, and browsing history across devices.

This browser also has excellent privacy protection, and it’s endlessly customizable in terms of plug-ins, extensions, and theme support.

Some downsides include the app being a bit slower compared to the competitors. The program also uses more system memory than other browsers.

Microsoft Edge

Microsoft Edge is the successor to the once extremely popular Internet Explorer, also designed by Microsoft. Edge is now the default browser on Windows devices. The browser is built on the same open-source project as Chrome, called Chromium, which means you can download add-ons, extensions, and integrations from the Chrome Store.

Edge runs just as well on macOS devices, so you can try it out if you’re a Mac user. The browser performs well when it comes to memory use, disk usage, and overall performance. The developers use a new Startup Boost technology to reduce the time needed to open the browser and its sleeping tabs.

Smaller Players Worth Noting

In the overview above, we listed some of the biggest players in the web browser world. There are many smaller names that offer quality services.

Vivaldi is the best alternative browser when it comes to customization options. It works on Chromium, so it’s closely related to Google Chrome. The best part about the experience is that it lets users change even the smallest details of the program. The interface is similar to Opera, so the tab previews, start page, buttons, and other tools look very familiar.

Some of the browser’s unique features include an Image Properties view with histogram, clutter-free printing, screenshot options, and more.

Brave is a popular alternative web browser that strives to reshape the web economy from the ground up. The browser blocks web ads by default, and it introduces an innovative way for websites to monetize users’ attention: it rewards users for browsing with its own company-made cryptocurrency. This makes Brave a popular option for users interested in the crypto world and tokens.

Like many other browsers on the list, this one is also based on Chromium, so you can find similarities with Google Chrome, Opera, and other browsers from the same family.

Tor Browser

Tor is a great browser for users concerned about privacy who are not interested in the world of ads. The software offers access to the dark web, the part of the internet not indexed by ordinary search engines. Traffic on Tor is encrypted in layers and routed through multiple volunteer-run relays (onion routing), which makes it very difficult to track.

The browser is based on Firefox, and it opens most websites just fine despite some privacy extensions and settings being locked.

A major downside of this browser is that the heavy encryption significantly slows down the speed of internet browsing.

Web Browsers Explained

Web browsers are powerful tools all internet users rely on for easy access to websites, web pages, images, text, and any other content. It’s important to understand how web browsers work to be able to get the most out of them, and this is exactly what this article aims to explain.

Now that you understand how browsers work, you can choose the best browser for your particular needs to make your daily internet experience better and more customized.

How to stay safe online:

  • Practice Strong Password Hygiene : Use a unique and complex password for each account. A password manager can help generate and store them. In addition, enable two-factor authentication (2FA) whenever available.
  • Invest in Your Safety : Buying the best antivirus for Windows 11 is key for your online security. A high-quality antivirus like Norton , McAfee , or Bitdefender will safeguard your PC from various online threats, including malware, ransomware, and spyware.
  • Be Wary of Phishing Attempts : Be cautious when receiving suspicious communications that ask for personal information. Legitimate businesses will never ask for sensitive details via email or text. Before clicking on any links, ensure the sender's authenticity.
  • Stay Informed : We cover a wide range of cybersecurity topics on our blog. And there are several credible sources offering threat reports and recommendations, such as NIST , CISA , FBI , ENISA , Symantec , Verizon , Cisco , Crowdstrike , and many more.

Happy surfing!

Frequently Asked Questions

Below are the most frequently asked questions.

Is Google a web browser or not?

Google is an example of a search engine, not a web browser. You can use Google on different web browsers to perform your internet search. Google Chrome, however, is a web browser of the same company.

How do I open a web browser?

Web browsers can be downloaded to your local computer or mobile device space and used from there. All you have to do is install the program and launch it whenever you need to use it.

What are the most popular web browsers?

The most popular web browsers include Google Chrome, Mozilla Firefox, Apple Safari and Microsoft Edge.



A short history of the Web

The Web has grown to revolutionise communications worldwide

Where the Web was born

Tim Berners-Lee, a British scientist, invented the World Wide Web (WWW) in 1989, while working at CERN. The Web was originally conceived and developed to meet the demand for automated information-sharing between scientists in universities and institutes around the world.


CERN is not an isolated laboratory, but rather the focal point for an extensive community that includes more than 17 000 scientists from over 100 countries. Although they typically spend some time on the CERN site, the scientists usually work at universities and national laboratories in their home countries. Reliable communication tools are therefore essential.

The basic idea of the WWW was to merge the evolving technologies of computers, data networks and hypertext into a powerful and easy to use global information system.

How the Web began


Tim Berners-Lee wrote the first proposal for the World Wide Web in March 1989 and his second proposal in May 1990 . Together with Belgian systems engineer Robert Cailliau, this was formalised as a management proposal in November 1990. This outlined the principal concepts and it defined important terms behind the Web. The document described a "hypertext project" called "WorldWideWeb" in which a "web" of "hypertext documents" could be viewed by “browsers”.

By the end of 1990, Tim Berners-Lee had the first Web server and browser up and running at CERN, demonstrating his ideas. He developed the code for his Web server on a NeXT computer. To prevent it being accidentally switched off, the computer had a hand-written label in red ink: " This machine is a server. DO NOT POWER IT DOWN!! "


info.cern.ch was the address of the world's first website and Web server, running on a NeXT computer at CERN. The first Web page address was http://info.cern.ch/hypertext/WWW/TheProject.html

This page contained links to information about the WWW project itself, including a description of hypertext, technical details for creating a Web server, and links to other Web servers as they became available.


The WWW design allowed easy access to existing information and an early web page linked to information useful to CERN scientists (e.g. the CERN phone book and guides for using CERN’s central computers). A search facility relied on keywords - there were no search engines in the early years.

Berners-Lee’s original Web browser running on NeXT computers showed his vision and had many of the features of current Web browsers. In addition, it included the ability to modify pages from directly inside the browser – the first Web editing capability. A surviving screenshot shows the browser running on a NeXT computer in 1993.

The Web extends

Only a few users had access to a NeXT computer platform on which the first browser ran, but development soon started on a simpler, ‘line-mode’ browser , which could run on any system. It was written by Nicola Pellow during her student work placement at CERN.

In 1991, Berners-Lee released his WWW software. It included the ‘line-mode’ browser, Web server software and a library for developers. In March 1991, the software became available to colleagues using CERN computers. A few months later, in August 1991, he announced the WWW software on Internet newsgroups and interest in the project spread around the world.

Going global

Thanks to the efforts of Paul Kunz and Louise Addis, the first Web server in the US came online in December 1991, once again in a particle physics laboratory: the Stanford Linear Accelerator Center (SLAC) in California. At this stage, there were essentially only two kinds of browser. One was the original development version, which was sophisticated but available only on NeXT machines. The other was the ‘line-mode’ browser, which was easy to install and run on any platform but limited in power and user-friendliness. It was clear that the small team at CERN could not do all the work needed to develop the system further, so Berners-Lee launched a plea via the internet for other developers to join in. Several individuals wrote browsers, mostly for the X-Window System. Notable among these were MIDAS by Tony Johnson from SLAC, Viola by Pei Wei from technical publisher O'Reilly Books, and Erwise by Finnish students from Helsinki University of Technology.

Early in 1993, the National Center for Supercomputing Applications (NCSA) at the University of Illinois released a first version of its Mosaic browser. This software ran in the X Window System environment, popular in the research community, and offered friendly window-based interaction. Shortly afterwards, the NCSA released versions for the PC and Macintosh environments as well. The existence of reliable, user-friendly browsers on these popular computers had an immediate impact on the spread of the WWW.

The European Commission approved its first web project (WISE) at the end of the same year, with CERN as one of the partners. On 30 April 1993, CERN made the source code of WorldWideWeb available on a royalty-free basis, making it free software. By late 1993 there were over 500 known web servers, and the WWW accounted for 1% of internet traffic, which seemed a lot in those days (the rest was remote access, e-mail and file transfer).

1994 was the “Year of the Web”. Initiated by Robert Cailliau, the First International World Wide Web conference was held at CERN in May. It was attended by 380 users and developers, and was hailed as the “Woodstock of the Web”.

As 1994 progressed, stories about the Web hit the media. A second conference, attended by 1300 people, was held in the US in October, organised by the NCSA and the newly-formed International WWW Conference Committee (IW3C2). By the end of 1994, the Web had 10 000 servers - 2000 of which were commercial - and 10 million users. Traffic was equivalent to shipping the entire collected works of Shakespeare every second. The technology was continually extended to cater for new needs. Security and tools for e-commerce were the most important features soon to be added.

Open standards

An essential point was that the web should remain an open standard for all to use and that no-one should lock it up into a proprietary system. In this spirit, CERN submitted a proposal to the Commission of the European Union under the ESPRIT programme: “WebCore”. The goal of the project was to form an international consortium, in collaboration with the US Massachusetts Institute of Technology (MIT). In 1994, Berners-Lee left CERN to join MIT and founded the International World Wide Web Consortium (W3C). Meanwhile, with approval of the LHC project clearly in sight, CERN decided that further web development was an activity beyond the laboratory’s primary mission. A new European partner for W3C was needed.

The European Commission turned to the French National Institute for Research in Computer Science and Controls (INRIA), to take over CERN's role. In April 1995, INRIA became the first European W3C host, followed by Keio University of Japan (Shonan Fujisawa Campus) in Asia in 1996. In 2003, ERCIM (European Research Consortium in Informatics and Mathematics) took over the role of European W3C Host from INRIA. In 2013, W3C announced Beihang University as the fourth Host. In September 2018, there were more than 400 member organisations from around the world.


The most important features of all major browsers


Browsers over the years have evolved and keep evolving. In the early days, the contents of web pages were much more basic.


There was Nexus (originally named WorldWideWeb): the first-ever browser, created by Tim Berners-Lee.

Thereafter, there was Lynx, a text-based browser, and then Mosaic, the first browser to allow images embedded within the text. Twenty years later, browsers had become much more sophisticated as web technologies grew.

There are, however, unique features that have stood out among many browsers over the years. These features, enhancements, and improvements are usually a result of improvements in web technologies and the various browser vendors vying for a piece of the market.

In this article, we will be looking at all the major browsers in the world today and what features make them important.

Mozilla Firefox

Firefox is one of the world’s most popular web browsers. It was developed by the Mozilla Foundation as a free and open-source web browser.

The usage of this browser peaked in 2009 at 32.21%, according to Net Market Share, but that figure has since dropped as a result of competition from other browsers. Its usage stands at 7.11% as of September 2020.

Firefox uses the Gecko rendering engine and SpiderMonkey as its JavaScript engine.

What features make up Firefox?

Firefox is, by default, privacy-driven. It has continuously improved its privacy features, including a variety of modern anti-tracking technologies that block out ad trackers and clickbait.

The security of browsers is quite important, and Firefox does a decent job in this regard. By default, third-party trackers that are used for re-targeted advertising are blocked in all Firefox releases since 2019.

Developer tool features

Firefox comes pre-built with standard developer tools, like console and viewport sizes for developers to check web page responsiveness. However, Mozilla even went further to provide extra features in its later Firefox Developer Edition, which has a variety of better tools for developers.


The developer edition includes a CSS grid tool, which provides easy visual support for developers to build their custom CSS grids.

It also includes other tools for visually editing a web page, such as fonts-adjustment and general style editing.

It also packs in performance tools for web optimization and memory tools for debugging application memory leaks.

This browser also has an in-built JSON previewer that automatically renders JSON files in an easy-to-view format.

In addition, it has impressive debugging capabilities for JavaScript files. The in-built debugger can show variable values at every point of debugging, step through each call stack in the code, and add breakpoints, including conditional breakpoints. All in dark mode!

The Firefox debugging console.

This enables the Firefox developer edition to provide a very good developer experience.
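The breakpoint behavior described above can be tried with any script; for example, the debugger statement acts as a programmatic breakpoint (the function below is a made-up example):

```javascript
// With devtools open, execution pauses at the "debugger" statement,
// letting you inspect variables and step through the call stack.
function total(items) {
  let sum = 0;
  for (const item of items) {
    debugger; // equivalent to setting a breakpoint on this line
    sum += item.price;
  }
  return sum;
}

total([{ price: 5 }, { price: 7 }]); // pauses twice, once per item
```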

Support for web technologies

Firefox currently has the second-highest browser score on caniuse.com , a website dedicated to delivering information about the compatibility of current web technologies and all major browsers.

All versions released since 2017 support CSS grid, and all versions released since 2014 support CSS flexbox.

The WebP image format has been supported since version 65, released in January 2019. JavaScript ES6 is fully supported in all versions of Firefox released since 2017.
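A quick way to confirm this kind of support at runtime is the standard CSS.supports API; a small sketch:

```javascript
// Feature detection for the CSS features discussed above.
console.log(CSS.supports("display", "grid")); // true in Firefox versions since 2017
console.log(CSS.supports("display", "flex")); // true in Firefox versions since 2014

// The same idea works for any property/value pair:
console.log(CSS.supports("backdrop-filter", "blur(4px)"));
```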

Microsoft Edge

The Edge browser is built by Microsoft as a better alternative to the discontinued Internet Explorer. Internet Explorer, released in 1995, shipped with every Microsoft operating system until it was discontinued. Edge is Microsoft’s way of bringing users back to its browser.

The initial build of Edge used EdgeHTML as its browser engine and Chakra as its JavaScript engine. This version is now known as Microsoft Edge Legacy, for which support will end on March 9, 2021, according to Microsoft support.

The new Edge browser, released in January 2020, is based on Google’s open-source project, Chromium. It uses the Blink browser engine and the V8 engine for JavaScript.

Microsoft Edge homepage.

What are its features?

Accessibility features

Microsoft does well to introduce some accessibility features in its Edge browser. One important feature is Read Aloud, which reads the contents of any web page out loud to the user. This feature is also available for PDF files opened within the browser.

The Edge Read aloud.

It also has an Immersive Reader feature: by pressing the F9 key on a Windows laptop, the Edge browser instantly converts the webpage into an efficient and less distracting reading interface.

PWA feature

The new Chromium-based Edge has an “install as app” feature that lets users install Progressive Web Applications as an app directly on their device.

Extensions feature

Edge now has extensions that can be installed in the browser to provide additional abilities, although Microsoft’s extension store is still in the beta phase.

It has support for most of the web technologies supported by other Chromium-based browsers, with full support for ES6 and ES6 classes, the Fetch API, the FileReader API, the Web Cryptography API, and others.

Google Chrome

Google Chrome is arguably the world’s most popular web browser and the browser with the largest market share, currently at a usage rate of 69.13% according to Net Market Share.

This browser was first released in 2008 and has since taken over the browser market. Its open-source Chromium base is preferred by many users for its speed and flexibility.

Google Chrome background.

Chrome does well in the area of security with its auto-generation of strong passwords. This makes registration forms involving passwords seamless, and it saves the password directly in Google’s cloud password manager, passwords.google.com.

Chrome’s extension market is arguably the largest. With over ten thousand extensions, users can find extensions for almost anything they need on the web.

Chrome has a profile feature that syncs immediately with the logged-in user’s account, saving browsing data directly to the user’s Google account. When the user logs into Chrome on another computer, their browser data is migrated to the new machine.

Developer Tools

The Chrome devtools are quite popular among users. They give developers a myriad of features that can be further extended with extensions. The typical devtools range from element inspection, to a console for JavaScript errors, to a network tab for viewing file-load requests and API calls, plus performance tools such as Lighthouse, which lets users accurately measure the performance, SEO, and speed of their websites or web apps.

Metrics for a progressive web app.

Added abilities can also be installed such as React devtools for building React applications and Vue devtools for Vue.js and Nuxt.js apps, as well as a host of other framework tools.

Apple Safari

Apple’s Safari browser was released in 2003 for the Macintosh OS. The browser has since shipped on every Apple device to date.

The browser uses Apple’s WebKit engine and the Nitro JavaScript engine. According to Net Market Share, the Safari browser currently has 3.69% usage worldwide.

The last version of Safari known to be released for Windows OS was Safari 5.1.7, released in 2012.

Hence, unlike other browsers, modern Safari is only available on Apple devices.

Safari background.

What are Safari’s most important features?

Privacy and security

Safari does a decent job of securing its users. The browser has an effective third-party tracking block ability, which prevents third-party cookies from tracking users with targeted ads across the web.

It also offers fingerprinting protection.

A browser fingerprint is a piece of information that is collected about a device via the browser’s interaction with the device. This information can be as detailed as the Operating system your device is running, the device specs, device language, and your device’s unique canvas fingerprint, which can enable anyone to identify your device over the web even without having cookies on the device.

There is also the sandboxing feature that Apple mentions, which protects users’ devices from malicious scripts on the web, such that every tab opens in its own sandbox and cannot infect other tabs or system files when compromised.

Safari also has an extension store where users can find beneficial extensions to improve their browsing experience. It may not be as robust as that of other browsers, but it does have some extensions that might not be found in other browsers.

Safari has PWA support. It also has support for newer web APIs such as the Geolocation API. In addition, it has partial support for the Media Capture from DOM Elements API: capturing a stream from <canvas> is supported, but capturing from <video> and <audio> elements is not.

Safari also has full support for the Web Animations API, among others.
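A small sketch of the Web Animations API mentioned above (the #box selector is a made-up example element):

```javascript
// Animate an element from script alone, with no CSS keyframes.
const box = document.querySelector("#box"); // assumes <div id="box"> exists

box.animate(
  [
    { opacity: 0, transform: "translateY(20px)" }, // start state
    { opacity: 1, transform: "translateY(0)" },    // end state
  ],
  { duration: 600, easing: "ease-out", fill: "forwards" }
);
```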

It has full support for CSS3 box-sizing, box-shadow, tab size, colors, grab and grabbing cursors, CSS 3 cursors (not available on iOS Safari), and CSS opacity in versions 14 and TP.

Opera

The Opera browser, built by Opera Software ASA, is a household name in the browser market. It has been under active development since it was first released around 1995, making it the oldest browser in that regard.

The Opera browser uses the Blink layout engine, the same as Chrome and now Edge. Its former JavaScript engine was known as Carakan, which it later dropped in favor of V8.

Its interface design is different from other browsers, which operate the top-to-bottom browser design. Opera’s features open on a left pane.

According to NetMarketShare, it has a usage rate of 1.2%.

Opera background.

Its features include:

Opera has been big on maintaining the privacy of its users. It was the first major browser to offer a free built-in VPN (Virtual Private Network), which allows users to surf the web while maintaining anonymity. It can also block ads, trackers, and unrequested pop-ups.

Gaming support

Opera is also the first major browser to have outright gaming support through its Opera GX browser. The Opera GX was specifically designed and built for online gaming environments.

It also has tooling to control system resource usage, such as RAM, CPU, and network usage limits, and related meters that inform gamers about how much of the system’s resources are being used while gaming.

The GX background for Opera.

Inter-device sync

Opera has a feature known as My Flow, which allows the desktop app to instantly share files and information with a mobile device via its mobile app, Opera Touch. The mobile app authenticates the desktop app by scanning a QR code.

This allows quick file and text sharing from desktop to mobile. Users can also share bookmarks and links.

Opera supports most of the web technologies supported by other major browsers, including CSS pseudo-element selectors; the CSS Paint API, which is only available in Chrome and Opera (and still experimental in Safari); the JavaScript Geolocation API; IndexedDB; the MediaRecorder API (still experimental in Safari); WebUSB, which allows web pages to communicate with devices over USB and is only available in Chrome and Opera; Web Bluetooth; and others.
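As one example from that list, here is a minimal sketch of the MediaRecorder API, recording five seconds of microphone audio (the clip handling is illustrative):

    async function recordClip() {
      // Ask for microphone access; the browser prompts the user.
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
      const recorder = new MediaRecorder(stream);
      const chunks = [];
      recorder.ondataavailable = (event) => chunks.push(event.data);
      recorder.onstop = () => {
        // Combine the recorded chunks into a single Blob for playback or upload.
        const clip = new Blob(chunks, { type: recorder.mimeType });
        console.log('Recorded', clip.size, 'bytes');
      };
      recorder.start();
      setTimeout(() => recorder.stop(), 5000);
    }
    recordClip();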

It is worth mentioning that there are other browsers users also find quite useful. These include Vivaldi, an Opera-styled, Chromium-based browser that is considered quite fast and private.

The Brave browser, founded by the creator of JavaScript, Brendan Eich, is a very popular privacy-focused browser. Brave has Shields, which are deployed to block all kinds of trackers and ads.

It is also known to be very fast; in fact, it claims to be the fastest. Brave can also install Chrome extensions directly.

There is also the UC Browser for mobile, which holds a considerable share of the mobile browser market, with over 100 million users globally.

Below is a table of the current usage statistics of the browsers captured by NetMarketShare.

Chrome currently holds a huge percentage of the market, but browsers like Firefox and Edge are keeping up the race for browser dominance. Internet Explorer still has a considerable userbase, which tends to be businesses that have yet to migrate some of their tools to a newer browser.

What features do you like in browsers, and which ones do you use?


Privacy and Security Comparison of Web Browsers: A Review

By R. Madhusudhan and Saurabh V. Surashe, Department of Mathematical and Computational Sciences, NIT-K Surathkal, Karnataka, India

Conference paper, first published online 31 March 2022, in Advanced Information Networking and Applications (AINA 2022), Lecture Notes in Networks and Systems, vol. 451, Springer, Cham. https://doi.org/10.1007/978-3-030-99619-2_44
In today’s digital world, mobile phones, computers, laptops, and other digital devices are essential. The Covid-19 pandemic drastically changed the whole world, and in the post-Covid world most businesses are switching to online business, online marketing, online customer service, and so on. As a result, the usage of web browsers has increased exponentially. And when we use the internet to such a great extent, there are also chances of being tracked, hacked, or cyber-bullied; hence the need for privacy protection while surfing the internet in web browsers. In this paper, we have surveyed different research papers. The paper focuses on the popular desktop browsers on Windows, such as Google Chrome, Mozilla Firefox, Microsoft Edge, Apple Safari, Brave, and Tor. This work studies the different parameters considered for privacy leakage and the different methods used for evaluation, both in normal browsing mode and in private browsing mode. The work finds that the documentation given for the private mode of popular browsers is incomplete, and that users’ privacy can still be leaked.

How do web browsers work? | Explained

Web browsers translate code into the dynamic web pages that form the backbone of our online experiences.

Updated - December 12, 2023 11:40 am IST

Published - December 11, 2023 04:02 pm IST


Web browsers are our digital passports to the vast universe of the internet. Their simplicity is deceptive: beneath their user-friendly interfaces lies a world of intricate processes that transform clicks into the web pages we interact with every day. In this edition of ‘Building Blocks’, let’s unravel the inner workings of web browsers, shedding light on the mechanisms that power our online experiences.

What are web browsers?

Fundamentally, the browser is an application that people use to send and receive messages via the internet. In other words, the browser is a program that runs on your device, with its purpose being to fetch information in different formats from the internet and show it on the device.

It also does the reverse, receiving your input (say, a click), translating it to code, and transmitting it to some other machine across the internet.

How were browsers born?

Let’s take a step back in time. In the early 1990s, the internet was a fledgling entity, largely text-based and navigated through little pieces of code typed out and transmitted to machines somewhere else, waiting for them to respond.

Then, in 1990, the English computer scientist Tim Berners-Lee introduced the concept of the World Wide Web, and with it came the first web browser, also named ‘WorldWideWeb’. It didn’t just display web pages; it also allowed users to edit them.

The next watershed moment was the debut of the Mosaic browser in 1993. Developed by a team at the U.S. National Center for Supercomputing Applications, it introduced the concept of displaying images alongside text, revolutionising the way we interacted with the web. This is when the internet became visually engaging.

Netscape Navigator burst onto the scene a year later and rapidly became the most popular browser of its time. It brought feature innovations like bookmarks and a user-friendly URL bar, simplifying navigation and making the web more accessible.

Then, the late 1990s witnessed a fierce battle among browser developers – a period dubbed the ‘Browser Wars’. Microsoft’s Internet Explorer (IE) and Netscape Navigator were the primary contenders. This competition spurred rapid innovation, with each browser striving to outperform the other in terms of speed, features, and compatibility.

By 2000, IE had emerged as the dominant browser, due in large part to its integration with the Windows operating system. But this period of monopoly also caused development and innovation to stagnate.

This monotony was broken in 2004, at the next turning point in the history of browsers: the emergence of Mozilla Firefox. Developed by a community of volunteers on open-source principles, Firefox introduced ground-breaking features like tabbed browsing and pop-up blocking, and also allowed users to ‘extend’ their personal browsers with add-ons.

Firefox’s arrival reinvigorated competition and set new standards for user-friendly browsing.

In 2008, Google launched Chrome, which swiftly gained popularity for its speed and minimalist design. Like browsers before it, Chrome’s success revitalised the browser market and encouraged innovation across the board.

Other browsers, like Mozilla Firefox, Apple’s Safari, and Microsoft Edge (as a successor to Internet Explorer) also evolved, providing users with a range of choices tailored to their preferences.

What’s inside a browser?

Modern web browsers have multiple core components, each of which is a complex technology in itself. They also rely on several others, plus standards that say how the internet should work.

1. Request and response – When you enter a website’s address (in the form of the Uniform Resource Locator, or URL) into your browser’s address bar or when you click a link, you set in motion a sequence of digital communication. The browser sends a request to a server, asking for the contents of the specific web page you’re interested in.

This request travels through a network of servers, like dispatching a letter through a series of post offices. Upon reaching the server, the request is received and processed. The server then formulates a response containing the information (or data) required to construct the web page. This response embarks on its journey back to your browser, carrying the digital blueprint for the page you requested.
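As a sketch, this is roughly what that round trip looks like from JavaScript, using the browser’s fetch API (the URL is a placeholder):

    // Ask a server for a page and read the response body.
    fetch('https://example.com/page.html')
      .then((response) => {
        console.log(response.status); // e.g. 200 when the server approves the request
        return response.text();      // the response body: the page's HTML
      })
      .then((html) => console.log(html.slice(0, 100)));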

2. Deconstructing the response – The response from the server is not a singular entity. Instead, it is an amalgam of various files. Typically, these files have information encoded in three languages:  HTML, CSS, and JavaScript. Each set of information plays a pivotal role in shaping the final presentation of the web page.

HTML, short for Hypertext Markup Language, provides the architectural blueprint of a webpage. Similar to the skeletal framework of a building, made with iron bars, bricks, and cement, HTML defines the structure of the page, outlining elements like headings, paragraphs, images, and links. As the cornerstone of web content, HTML is the foundation on which browsers construct the visual layout.

Imagine CSS, or Cascading Style Sheets, to be the interior designer of the digital world. This information imparts style and aesthetics to the HTML structure by controlling attributes like colour schemes, fonts, spacing, and positioning. CSS ensures that the webpage comes into its unique visual identity.

JavaScript is the dynamic engine, making web pages interactive and responsive. Analogous to the electrical system in a building, JavaScript breathes life into static content. It allows interactive elements like pop-ups, forms, animations, and real-time updates, creating an engaging user experience. It also allows the browser to run some scripts and perform simple tasks on a page instead of waiting to receive instructions from a distant server.
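A toy page shows the three layers working together; the id, colour, and message here are purely illustrative:

    <h1 id="greeting">Hello</h1>
    <style>
      /* CSS: appearance */
      #greeting { color: steelblue; font-family: sans-serif; }
    </style>
    <script>
      // JavaScript: behaviour; clicking the heading updates it
      // without a round trip to a distant server.
      document.getElementById('greeting')
        .addEventListener('click', (event) => { event.target.textContent = 'Hello, web!'; });
    </script>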

3. Rendering – With HTML, CSS, and JavaScript in hand, a browser begins the process of rendering. This involves deciphering the HTML to understand the structural arrangement, applying CSS for stylistic finesse, and executing JavaScript to infuse interactivity. (You can deconstruct the final result on a webpage by right-clicking on the page and selecting ‘Inspect’.)

This process is remarkably swift, assembling the final webpage and presenting it to the user in a cohesive and visually appealing manner in much less than a second, depending on the amount of data. Rendering engines are in themselves a key piece of technology that enable screens to display graphics.

4. Managing data – Browsers serve as adept custodians of your digital footprint, so they also implement instruments like cookies and cache to enhance your online experience.

Cookies are small snippets of data stored on your computer by websites you visit. Think of them as digital post-it notes. They retain information such as login status, site preferences, and shopping cart contents. This allows you to navigate seamlessly, without having to re-login to a site when you close and reopen it in a short span of time.
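For instance, a page can set a small cookie from JavaScript; the name, value, and lifetime here are illustrative:

    // Remember a theme preference for one week.
    document.cookie = 'theme=dark; max-age=' + 60 * 60 * 24 * 7 + '; path=/';

    // On a later visit, the page can read it back:
    console.log(document.cookie); // "theme=dark"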

Comparable to short-term memory, the cache is a repository of frequently accessed files. When you revisit a webpage, the browser checks its cache to see if it already has a copy of the required files. If so, it retrieves them from the cache rather than re-downloading them from the server. This accelerates page loading times and conserves bandwidth.

5. Security – Web browsers are also sentinels that guard your digital sanctuary. They use an array of security measures to protect your data as it travels between your computer and various servers, and even when it is stored on your computer itself. They use encryption protocols, such as HTTPS, to create secure ‘tunnels’ for data exchange, shielding the information from prying eyes. Browsers also use warning systems to alert you about potentially malicious websites, preventing inadvertent exposure to threats.

What next for browsing?

As technology hurtles forward, web browsers evolve in tandem. They are embracing cutting-edge technologies like WebAssembly, a format that enables near-native performance within the browser environment. Support for virtual reality (VR) and augmented reality (AR) experiences is also on the horizon, promising immersive online interactions. Additionally, privacy features are being bolstered, providing users with greater control over their digital footprint.

In sum, web browsers are the unsung heroes of our digital endeavours, translating code into the dynamic web pages that form the backbone of our online experiences. By unravelling the intricate tapestry of processes that underlie their operation, we gain a newfound appreciation for the seamless magic they conjure with every click.

The next time you open your browser, remember that behind the scenes, a symphony of digital choreography is unfolding to bring you the online world to your fingertips. Happy browsing!

Varun Vohra is a co-founder of Vaaree.com, a curated marketplace for home products, and a developer and tech-entrepreneur with 14 years of experience.


How the web works


How the web works provides a simplified view of what happens when you view a webpage in a web browser on your computer or phone.

This theory is not essential to writing web code in the short term, but before long you'll really start to benefit from understanding what's happening in the background.

Clients and servers

Computers connected to the internet are called clients and servers. A simplified diagram of how they interact might look like this:

Figure: two circles representing client and server, with an arrow labelled "request" going from client to server and an arrow labelled "response" going from server to client.

  • Clients are the typical web user's internet-connected devices (for example, your computer connected to your Wi-Fi, or your phone connected to your mobile network) and web-accessing software available on those devices (usually a web browser like Firefox or Chrome).
  • Servers are computers that store webpages, sites, or apps. When a client device wants to access a webpage, a copy of the webpage is downloaded from the server onto the client machine to be displayed in the user's web browser.

The other parts of the toolbox

The client and server we've described above don't tell the whole story. There are many other parts involved, and we'll describe them below.

For now, let's imagine that the web is a road. On one end of the road is the client, which is like your house. On the other end of the road is the server, which is a shop you want to buy something from.


In addition to the client and the server, we also need to say hello to:

  • Your internet connection : Allows you to send and receive data on the web. It's basically like the street between your house and the shop.
  • TCP/IP : Transmission Control Protocol and Internet Protocol are communication protocols that define how data should travel across the internet. This is like the transport mechanisms that let you place an order, go to the shop, and buy your goods. In our example, this is like a car or a bike (or however else you might get around).
  • DNS : Domain Name System is like an address book for websites. When you type a web address in your browser, the browser looks at the DNS to find the website's IP address before it can retrieve the website. The browser needs to find out which server the website lives on, so it can send HTTP messages to the right place (see below). This is like looking up the address of the shop so you can access it.
  • HTTP : Hypertext Transfer Protocol is an application protocol that defines a language for clients and servers to speak to each other. This is like the language you use to order your goods.
  • Code files : Websites are built primarily from HTML, CSS, and JavaScript, though you'll meet other technologies a bit later.
  • Assets : This is a collective name for all the other stuff that makes up a website, such as images, music, video, Word documents, and PDFs.

So what happens, exactly?

When you type a web address into your browser (for our analogy that's like walking to the shop):

  • The browser goes to the DNS server, and finds the real address of the server that the website lives on (you find the address of the shop).
  • The browser sends an HTTP request message to the server, asking it to send a copy of the website to the client (you go to the shop and order your goods). This message, and all other data sent between the client and the server, is sent across your internet connection using TCP/IP.
  • If the server approves the client's request, the server sends the client a "200 OK" message, which means "Of course you can look at that website! Here it is", and then starts sending the website's files to the browser as a series of small chunks called data packets (the shop gives you your goods, and you bring them back to your house).
  • The browser assembles the small chunks into a complete web page and displays it to you (the goods arrive at your door — new shiny stuff, awesome!).

Order in which component files are parsed

When browsers send requests to servers for HTML files, those HTML files often contain <link> elements referencing external CSS stylesheets and <script> elements referencing external JavaScript scripts. It's important to know the order in which those files are parsed by the browser as the browser loads the page:

  • The browser parses the HTML file first, and that leads to the browser recognizing any <link> -element references to external CSS stylesheets and any <script> -element references to scripts.
  • As the browser parses the HTML, it sends requests back to the server for any CSS files it has found from <link> elements, and any JavaScript files it has found from <script> elements, and from those, then parses the CSS and JavaScript.
  • The browser generates an in-memory DOM tree from the parsed HTML, generates an in-memory CSSOM structure from the parsed CSS, and compiles and executes the parsed JavaScript.
  • As the browser builds the DOM tree and applies the styles from the CSSOM tree and executes the JavaScript, a visual representation of the page is painted to the screen, and the user sees the page content and can begin to interact with it.
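For example, a minimal page with one external stylesheet and one external script (the file names are illustrative):

    <!doctype html>
    <html>
      <head>
        <!-- Found while parsing the HTML; fetched, then parsed into the CSSOM -->
        <link rel="stylesheet" href="styles.css" />
        <!-- Found while parsing the HTML; fetched, compiled, and executed -->
        <script src="app.js"></script>
      </head>
      <body>
        <p>Painted once the DOM tree is built and the CSSOM styles are applied.</p>
      </body>
    </html>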

DNS explained

Real web addresses aren't the nice, memorable strings you type into your address bar to find your favorite websites. They are special numbers that look like this: 192.0.2.172 .

This is called an IP address , and it represents a unique location on the web. However, it's not very easy to remember, is it? That's why the Domain Name System was invented. This system uses special servers that match up a web address you type into your browser (like "mozilla.org") to the website's real (IP) address.

Websites can be reached directly via their IP addresses. You can use a DNS lookup tool to find the IP address of a website.

Packets explained

Earlier we used the term "packets" to describe the format in which the data is transferred between the client and server. What do we mean here? Basically, when data is sent across the web, it is sent in thousands of small chunks. There are multiple reasons why data is sent in small packets. They are sometimes dropped or corrupted, and it's easier to replace small chunks when this happens. Additionally, the packets can be routed along different paths, making the exchange faster and allowing many different users to download the same website at the same time. If each website was sent as a single big chunk, only one user could download it at a time, which obviously would make the web very inefficient and not much fun to use.

  • How the Internet works
  • HTTP — an Application-Level Protocol
  • HTTP: Let's GET It On!
  • HTTP: Response Codes



How browsers work

Behind the scenes of modern web browsers

Paul Irish

This comprehensive primer on the internal operations of WebKit and Gecko is the result of much research done by Israeli developer Tali Garsiel. Over a few years, she reviewed all the published data about browser internals and spent a lot of time reading web browser source code.

As a web developer, learning the internals of browser operations helps you make better decisions and know the justifications behind development best practices. While this is a rather lengthy document, we recommend you spend some time digging in. You'll be glad you did. (Paul Irish, Chrome Developer Relations)

Introduction

Web browsers are the most widely used software. In this primer, I explain how they work behind the scenes. We will see what happens when you type google.com in the address bar until you see the Google page on the browser screen.

Browsers we'll talk about

There are five major browsers used on desktop today: Chrome, Internet Explorer, Firefox, Safari and Opera. On mobile, the main browsers are Android Browser, iPhone, Opera Mini and Opera Mobile, UC Browser, the Nokia S40/S60 browsers and Chrome, all of which, except for the Opera browsers, are based on WebKit. I will give examples from the open source browsers Firefox and Chrome, and Safari (which is partly open source). According to StatCounter statistics (as of June 2013) Chrome, Firefox and Safari make up around 71% of global desktop browser usage. On mobile, Android Browser, iPhone and Chrome constitute around 54% of usage.

The browser's main functionality

The main function of a browser is to present the web resource you choose, by requesting it from the server and displaying it in the browser window. The resource is usually an HTML document, but may also be a PDF, image, or some other type of content. The location of the resource is specified by the user using a URI (Uniform Resource Identifier).

The way the browser interprets and displays HTML files is specified in the HTML and CSS specifications. These specifications are maintained by the W3C (World Wide Web Consortium) organization, which is the standards organization for the web. For years browsers conformed to only a part of the specifications and developed their own extensions. That caused serious compatibility issues for web authors. Today most of the browsers more or less conform to the specifications.

Browser user interfaces have a lot in common with each other. Among the common user interface elements are:

  • Address bar for inserting a URI
  • Back and forward buttons
  • Bookmarking options
  • Refresh and stop buttons for refreshing or stopping the loading of current documents
  • Home button that takes you to your home page

Strangely enough, the browser's user interface is not specified in any formal specification; it just comes from good practices shaped over years of experience and by browsers imitating each other. The HTML5 specification doesn't define the UI elements a browser must have, but lists some common elements. Among those are the address bar, status bar, and toolbar. There are, of course, features unique to a specific browser, like Firefox's download manager.

High-level infrastructure

The browser's main components are:

  • The user interface : this includes the address bar, back/forward button, bookmarking menu, etc. Every part of the browser display except the window where you see the requested page.
  • The browser engine : marshals actions between the UI and the rendering engine.
  • The rendering engine : responsible for displaying requested content. For example if the requested content is HTML, the rendering engine parses HTML and CSS, and displays the parsed content on the screen.
  • Networking : for network calls such as HTTP requests, using different implementations for different platforms behind a platform-independent interface.
  • UI backend : used for drawing basic widgets like combo boxes and windows. This backend exposes a generic interface that is not platform specific. Underneath it uses operating system user interface methods.
  • JavaScript interpreter . Used to parse and execute JavaScript code.
  • Data storage . This is a persistence layer. The browser may need to save all sorts of data locally, such as cookies. Browsers also support storage mechanisms such as localStorage, IndexedDB, WebSQL and FileSystem.

Browser components

It is important to note that browsers such as Chrome run multiple instances of the rendering engine: one for each tab. Each tab runs in a separate process.

Rendering engines

The responsibility of the rendering engine is, well… rendering: displaying the requested contents on the browser screen.

By default the rendering engine can display HTML and XML documents and images. It can display other types of data via plug-ins or extensions; for example, displaying PDF documents using a PDF viewer plug-in. However, in this chapter we will focus on the main use case: displaying HTML and images that are formatted using CSS.

Different browsers use different rendering engines: Internet Explorer uses Trident, Firefox uses Gecko, Safari uses WebKit. Chrome and Opera (from version 15) use Blink, a fork of WebKit.

WebKit is an open source rendering engine which started as an engine for the Linux platform and was modified by Apple to support Mac and Windows.

The main flow

The rendering engine will start getting the contents of the requested document from the networking layer. This will usually be done in 8kB chunks.

After that, this is the basic flow of the rendering engine:

Rendering engine basic flow

The rendering engine will start parsing the HTML document and convert elements to DOM nodes in a tree called the "content tree". The engine will parse the style data, both in external CSS files and in style elements. Styling information together with visual instructions in the HTML will be used to create another tree: the render tree .

The render tree contains rectangles with visual attributes like color and dimensions. The rectangles are in the right order to be displayed on the screen.

After the construction of the render tree it goes through a " layout " process. This means giving each node the exact coordinates where it should appear on the screen. The next stage is painting - the render tree will be traversed and each node will be painted using the UI backend layer.

It's important to understand that this is a gradual process. For better user experience, the rendering engine will try to display contents on the screen as soon as possible. It won't wait until all HTML is parsed before starting to build and layout the render tree. Parts of the content will be parsed and displayed, while the process continues with the rest of the contents that keeps coming from the network.

Main flow examples

WebKit main flow.

From figures 3 and 4 you can see that although WebKit and Gecko use slightly different terminology, the flow is basically the same.

Gecko calls the tree of visually formatted elements a "Frame tree". Each element is a frame. WebKit uses the term "Render Tree" and it consists of "Render Objects". WebKit uses the term "layout" for the placing of elements, while Gecko calls it "Reflow". "Attachment" is WebKit's term for connecting DOM nodes and visual information to create the render tree. A minor non-semantic difference is that Gecko has an extra layer between the HTML and the DOM tree. It is called the "content sink" and is a factory for making DOM elements. We will talk about each part of the flow:

Parsing - general

Since parsing is a very significant process within the rendering engine, we will go into it a little more deeply. Let's begin with a little introduction about parsing.

Parsing a document means translating it to a structure the code can use. The result of parsing is usually a tree of nodes that represent the structure of the document. This is called a parse tree or a syntax tree.

For example, parsing the expression 2 + 3 - 1 could return this tree:

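Sketched in text form, one plausible shape for that tree (following the grammar defined in the parsing example below, where an expression is a term, an operation, and another term) is:

    expression
    |-- expression (2 + 3)
    |   |-- 2
    |   |-- +
    |   `-- 3
    |-- -
    `-- 1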

Parsing is based on the syntax rules the document obeys: the language or format it was written in. Every format you can parse must have deterministic grammar consisting of vocabulary and syntax rules. It is called a context free grammar . Human languages are not such languages and therefore cannot be parsed with conventional parsing techniques.

Parser - Lexer combination

Parsing can be separated into two sub processes: lexical analysis and syntax analysis.

Lexical analysis is the process of breaking the input into tokens. Tokens are the language vocabulary: the collection of valid building blocks. In human language it will consist of all the words that appear in the dictionary for that language.

Syntax analysis is the applying of the language syntax rules.

Parsers usually divide the work between two components: the lexer (sometimes called tokenizer) that is responsible for breaking the input into valid tokens, and the parser that is responsible for constructing the parse tree by analyzing the document structure according to the language syntax rules.

The lexer knows how to strip irrelevant characters like white spaces and line breaks.

From source document to parse trees

The parsing process is iterative. The parser will usually ask the lexer for a new token and try to match the token with one of the syntax rules. If a rule is matched, a node corresponding to the token will be added to the parse tree and the parser will ask for another token.

If no rule matches, the parser will store the token internally, and keep asking for tokens until a rule matching all the internally stored tokens is found. If no rule is found then the parser will raise an exception. This means the document was not valid and contained syntax errors.

Translation

In many cases the parse tree is not the final product. Parsing is often used in translation: transforming the input document to another format. An example is compilation. The compiler that compiles source code into machine code first parses it into a parse tree and then translates the tree into a machine code document.

Compilation flow

Parsing example

In figure 5 we built a parse tree from a mathematical expression. Let's try to define a simple mathematical language and see the parse process.

1. The language syntax building blocks are expressions, terms, and operations.
2. Our language can include any number of expressions.
3. An expression is defined as a "term" followed by an "operation" followed by another term.
4. An operation is a plus token or a minus token.
5. A term is an integer token or an expression.

Let's analyze the input 2 + 3 - 1 .

The first substring that matches a rule is 2 : according to rule #5 it is a term. The second match is 2 + 3 : this matches the third rule: a term followed by an operation followed by another term. The next match will only be hit at the end of the input. 2 + 3 - 1 is an expression because we already know that 2 + 3 is a term, so we have a term followed by an operation followed by another term. 2 + + won't match any rule and therefore is an invalid input.

Formal definitions for vocabulary and syntax

Vocabulary is usually expressed by regular expressions .

For example, our language will be defined as:
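One plausible set of definitions, with each token given as a regular expression:

    INTEGER: 0|[1-9][0-9]*
    PLUS: +
    MINUS: -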

As you see, integers are defined by a regular expression.

Syntax is usually defined in a format called BNF. Our language will be defined as:
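Following the numbered rules above:

    expression := term operation term
    operation  := PLUS | MINUS
    term       := INTEGER | expression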

We said that a language can be parsed by regular parsers if its grammar is a context-free grammar. An intuitive definition of a context-free grammar is a grammar that can be entirely expressed in BNF. For a formal definition, see Wikipedia's article on context-free grammar.

Types of parsers

There are two types of parsers: top down parsers and bottom up parsers. An intuitive explanation is that top down parsers examine the high level structure of the syntax and try to find a rule match. Bottom up parsers start with the input and gradually transform it into the syntax rules, starting from the low level rules until high level rules are met.

Let's see how the two types of parsers will parse our example.

The top down parser will start from the higher level rule: it will identify 2 + 3 as an expression. It will then identify 2 + 3 - 1 as an expression (the process of identifying the expression evolves, matching the other rules, but the start point is the highest level rule).

The bottom up parser will scan the input until a rule is matched. It will then replace the matching input with the rule. This will go on until the end of the input. The partly matched expression is placed on the parser's stack.

    Stack                   Input
                            2 + 3 - 1
    term                    + 3 - 1
    term operation          3 - 1
    expression              - 1
    expression operation    1
    expression

This type of bottom up parser is called a shift-reduce parser, because the input is shifted to the right (imagine a pointer pointing first at the input start and moving to the right) and is gradually reduced to syntax rules.

Generating parsers automatically

There are tools that can generate a parser. You feed them the grammar of your language - its vocabulary and syntax rules - and they generate a working parser. Creating a parser requires a deep understanding of parsing and it's not easy to create an optimized parser by hand, so parser generators can be very useful.

WebKit uses two well known parser generators: Flex for creating a lexer and Bison for creating a parser (you might run into them with the names Lex and Yacc). Flex input is a file containing regular expression definitions of the tokens. Bison's input is the language syntax rules in BNF format.

HTML Parser

The job of the HTML parser is to parse the HTML markup into a parse tree.

HTML grammar

The vocabulary and syntax of HTML are defined in specifications created by the W3C organization.

As we have seen in the parsing introduction, grammar syntax can be defined formally using formats like BNF.

Unfortunately all the conventional parser topics don't apply to HTML (I didn't bring them up just for fun - they will be used in parsing CSS and JavaScript). HTML cannot easily be defined by a context free grammar that parsers need.

There is a formal format for defining HTML - DTD (Document Type Definition) - but it is not a context free grammar.

This appears strange at first sight; HTML is rather close to XML. There are lots of available XML parsers. There is an XML variation of HTML - XHTML - so what's the big difference?

The difference is that the HTML approach is more "forgiving": it lets you omit certain tags (which are then added implicitly), or sometimes omit start or end tags, and so on. On the whole it's a "soft" syntax, as opposed to XML's stiff and demanding syntax.

This seemingly small detail makes a world of a difference. On one hand this is the main reason why HTML is so popular: it forgives your mistakes and makes life easy for the web author. On the other hand, it makes it difficult to write a formal grammar. So to summarize, HTML cannot be parsed easily by conventional parsers, since its grammar is not context free. HTML cannot be parsed by XML parsers.

HTML definition is in a DTD format. This format is used to define languages of the SGML family. The format contains definitions for all allowed elements, their attributes and hierarchy. As we saw earlier, the HTML DTD doesn't form a context free grammar.

There are a few variations of the DTD. The strict mode conforms solely to the specifications but other modes contain support for markup used by browsers in the past. The purpose is backwards compatibility with older content. The current strict DTD is here: www.w3.org/TR/html4/strict.dtd

The output tree (the "parse tree") is a tree of DOM element and attribute nodes. DOM is short for Document Object Model. It is the object presentation of the HTML document and the interface of HTML elements to the outside world like JavaScript.

The root of the tree is the " Document " object.

The DOM has an almost one-to-one relation to the markup. For example:
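Consider a small page like this one (the exact markup is illustrative):

    <html>
      <body>
        <p>Hello World</p>
        <div><img src="example.png" /></div>
      </body>
    </html>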

This markup would be translated to the following DOM tree:
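Sketched in text form:

    Document
    `-- HTMLHtmlElement (html)
        `-- HTMLBodyElement (body)
            |-- HTMLParagraphElement (p)
            |   `-- Text "Hello World"
            `-- HTMLDivElement (div)
                `-- HTMLImageElement (img)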


Like HTML, DOM is specified by the W3C organization. See www.w3.org/DOM/DOMTR . It is a generic specification for manipulating documents. A specific module describes HTML specific elements. The HTML definitions can be found here: www.w3.org/TR/2003/REC-DOM-Level-2-HTML-20030109/idl-definitions.html .

When I say the tree contains DOM nodes, I mean the tree is constructed of elements that implement one of the DOM interfaces. Browsers use concrete implementations that have other attributes used by the browser internally.

The parsing algorithm

As we saw in the previous sections, HTML cannot be parsed using the regular top down or bottom up parsers.

The reasons are:

  • The forgiving nature of the language.
  • The fact that browsers have traditional error tolerance to support well known cases of invalid HTML.
  • The parsing process is reentrant. For other languages, the source doesn't change during parsing, but in HTML, dynamic code (such as script elements containing document.write() calls) can add extra tokens, so the parsing process actually modifies the input.

Unable to use the regular parsing techniques, browsers create custom parsers for parsing HTML.

The parsing algorithm is described in detail by the HTML5 specification . The algorithm consists of two stages: tokenization and tree construction.

Tokenization is the lexical analysis, parsing the input into tokens. Among HTML tokens are start tags, end tags, attribute names and attribute values.

The tokenizer recognizes the token, gives it to the tree constructor, and consumes the next character for recognizing the next token, and so on until the end of the input.

HTML parsing flow (taken from HTML5 spec)

The tokenization algorithm

The algorithm's output is an HTML token. The algorithm is expressed as a state machine. Each state consumes one or more characters of the input stream and updates the next state according to those characters. The decision is influenced by the current tokenization state and by the tree construction state. This means the same consumed character will yield different results for the correct next state, depending on the current state. The algorithm is too complex to describe fully, so let's see a simple example that will help us understand the principle.

Basic example - tokenizing the following HTML:
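A minimal document suffices:

    <html>
      <body>
        Hello world
      </body>
    </html>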

The initial state is the "Data state". When the < character is encountered, the state is changed to the "Tag open state". Consuming an a-z character causes creation of a "Start tag token" and a change to the "Tag name state". We stay in this state until the > character is consumed, appending each character to the new token's name. In our case the created token is an html token.

When the > character is reached, the current token is emitted and the state changes back to the "Data state". The <body> tag will be treated by the same steps. So far the html and body tags have been emitted. We are now back at the "Data state". Consuming the H character of Hello world causes creation and emission of a character token; this goes on until the < of </body> is reached. We emit a character token for each character of Hello world.

We are now back at the "Tag open state". Consuming the next input / causes creation of an end tag token and a move to the "Tag name state". Again we stay in this state until we reach >. Then the new tag token is emitted and we go back to the "Data state". The </html> input will be treated like the previous case.

Tokenizing the example input

Tree construction algorithm

When the parser is created the Document object is created. During the tree construction stage the DOM tree with the Document in its root will be modified and elements will be added to it. Each node emitted by the tokenizer will be processed by the tree constructor. For each token the specification defines which DOM element is relevant to it and will be created for this token. The element is added to the DOM tree, and also the stack of open elements. This stack is used to correct nesting mismatches and unclosed tags. The algorithm is also described as a state machine. The states are called "insertion modes".

Let's see the tree construction process for the example input:
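The input is the same minimal document used in the tokenization example:

    <html><body>Hello world</body></html>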

The input to the tree construction stage is a sequence of tokens from the tokenization stage. The first mode is the "initial mode" . Receiving the "html" token will cause a move to the "before html" mode and a reprocessing of the token in that mode. This will cause creation of the HTMLHtmlElement element, which will be appended to the root Document object.

The state will be changed to "before head" . The "body" token is then received. An HTMLHeadElement will be created implicitly although we don't have a "head" token and it will be added to the tree.

We now move to the "in head" mode and then to "after head" . The body token is reprocessed, an HTMLBodyElement is created and inserted and the mode is transferred to "in body" .

The character tokens of the "Hello world" string are now received. The first one will cause creation and insertion of a "Text" node and the other characters will be appended to that node.

The receiving of the body end token will cause a transfer to "after body" mode. We will now receive the html end tag which will move us to "after after body" mode. Receiving the end of file token will end the parsing.

Actions when the parsing is finished

At this stage the browser will mark the document as interactive and start parsing scripts that are in "deferred" mode: those that should be executed after the document is parsed. The document state will be then set to "complete" and a "load" event will be fired.

You can see the full algorithms for tokenization and tree construction in the HTML5 specification .

Browsers' error tolerance

You never get an "Invalid Syntax" error on an HTML page. Browsers fix any invalid content and go on.

Take this HTML for example:
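Something like this (a representative sample; the exact markup is illustrative):

    <html>
      <mytag>
      </mytag>
      <div>
      <p>
      </div>
        Really lousy HTML
      </p>
    </html>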

I must have violated about a million rules ("mytag" is not a standard tag, wrong nesting of the "p" and "div" elements and more) but the browser still shows it correctly and doesn't complain. So a lot of the parser code is fixing the HTML author mistakes.

Error handling is quite consistent in browsers, but amazingly enough it hasn't been part of HTML specifications. Like bookmarking and back/forward buttons it's just something that developed in browsers over the years. There are known invalid HTML constructs repeated on many sites, and the browsers try to fix them in a way conformant with other browsers.

The HTML5 specification does define some of these requirements. (WebKit summarizes this nicely in the comment at the beginning of the HTML parser class.)

The parser parses tokenized input into the document, building up the document tree. If the document is well-formed, parsing it is straightforward.

Unfortunately, we have to handle many HTML documents that are not well-formed, so the parser has to be tolerant about errors.

We have to take care of at least the following error conditions:

  • The element being added is explicitly forbidden inside some outer tag. In this case we should close all tags up to the one which forbids the element, and add it afterwards.
  • We are not allowed to add the element directly. It could be that the person writing the document forgot some tag in between (or that the tag in between is optional). This could be the case with the following tags: HTML HEAD BODY TBODY TR TD LI (did I forget any?).
  • We want to add a block element inside an inline element. Close all inline elements up to the next higher block element.
  • If this doesn't help, close elements until we are allowed to add the element - or ignore the tag.

Let's see some WebKit error tolerance examples:

</br> instead of <br>

Some sites use </br> instead of <br> . In order to be compatible with IE and Firefox, WebKit treats this like <br> .

Note that the error handling is internal: it won't be presented to the user.

A stray table

A stray table is a table inside another table, but not inside a table cell.

For example:
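A sketch of such markup:

    <table>
      <table>
        <tr><td>inner table</td></tr>
      </table>
      <tr><td>outer table</td></tr>
    </table>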

WebKit will change the hierarchy to two sibling tables:
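Roughly:

    <table>
      <tr><td>outer table</td></tr>
    </table>
    <table>
      <tr><td>inner table</td></tr>
    </table>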

WebKit uses a stack for the current element contents: it will pop the inner table out of the outer table stack. The tables will now be siblings.

Nested form elements

In case the user puts a form inside another form, the second form is ignored.

A too deep tag hierarchy

WebKit's comment here notes that some sites nest enormous numbers of identical tags, and the parser simply caps the nesting depth of same-type tags rather than honoring it.

Misplaced html or body end tags

Here WebKit's comment explains support for really broken HTML: the parser never closes the body element early, since some pages close it before the actual end of the document; stray html or body end tags are simply ignored.

So web authors beware - unless you want to appear as an example in a WebKit error tolerance code snippet - write well formed HTML.

CSS parsing

Remember the parsing concepts in the introduction? Well, unlike HTML, CSS is a context free grammar and can be parsed using the types of parsers described in the introduction. In fact the CSS specification defines CSS lexical and syntax grammar .

Let's see some examples:

The lexical grammar (vocabulary) is defined by regular expressions for each token:
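For example (abridged; the patterns follow the CSS2.1 lexical grammar):

    num    [0-9]+|[0-9]*"."[0-9]+
    name   {nmchar}+
    ident  -?{nmstart}{nmchar}*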

"ident" is short for identifier, like a class name. "name" is an element id (that is referred by "#" )

The syntax grammar is described in BNF.

Explanation:

A ruleset is this structure:
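For example (the declaration values are illustrative):

    div.error, a.error {
      color: red;
      font-size: 1.2em;
    }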

div.error and a.error are selectors. The part inside the curly braces contains the rules that are applied by this ruleset. This structure is defined formally in this definition:
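Abridged from the CSS2.1 grammar:

    ruleset
      : selector [ ',' S* selector ]*
        '{' S* declaration [ ';' S* declaration ]* '}' S*
      ;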

This means a ruleset is a selector or optionally a number of selectors separated by a comma and spaces (S stands for white space). A ruleset contains curly braces and inside them a declaration or optionally a number of declarations separated by a semicolon. "declaration" and "selector" will be defined in the following BNF definitions.

WebKit CSS parser

WebKit uses Flex and Bison parser generators to create parsers automatically from the CSS grammar files. As you recall from the parser introduction, Bison creates a bottom up shift-reduce parser. Firefox uses a top down parser written manually. In both cases each CSS file is parsed into a StyleSheet object. Each object contains CSS rules. The CSS rule objects contain selector and declaration objects and other objects corresponding to CSS grammar.

Parsing CSS.

Processing order for scripts and style sheets

The model of the web is synchronous. Authors expect scripts to be parsed and executed immediately when the parser reaches a <script> tag. The parsing of the document halts until the script has been executed. If the script is external then the resource must first be fetched from the network - this is also done synchronously, and parsing halts until the resource is fetched. This was the model for many years and is also specified in HTML4 and 5 specifications. Authors can add the "defer" attribute to a script, in which case it won't halt document parsing and will execute after the document is parsed. HTML5 adds an option to mark the script as asynchronous so it will be parsed and executed by a different thread.
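As a sketch, the three loading behaviours described above look like this in markup (the file names are illustrative):

    <!-- Classic script: parsing halts until this is fetched and executed -->
    <script src="blocking.js"></script>

    <!-- Deferred script: doesn't halt parsing; runs after the document is parsed -->
    <script src="deferred.js" defer></script>

    <!-- Async script (HTML5): fetched in parallel and run as soon as it's ready -->
    <script src="async.js" async></script>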

Speculative parsing

Both WebKit and Firefox do this optimization. While executing scripts, another thread parses the rest of the document and finds out what other resources need to be loaded from the network and loads them. In this way, resources can be loaded on parallel connections and overall speed is improved. Note: the speculative parser only parses references to external resources like external scripts, style sheets and images: it doesn't modify the DOM tree - that is left to the main parser.

Style sheets

Style sheets on the other hand have a different model. Conceptually it seems that since style sheets don't change the DOM tree, there is no reason to wait for them and stop the document parsing. There is an issue, though, of scripts asking for style information during the document parsing stage. If the style is not loaded and parsed yet, the script will get wrong answers and apparently this caused lots of problems. It seems to be an edge case but is quite common. Firefox blocks all scripts when there is a style sheet that is still being loaded and parsed. WebKit blocks scripts only when they try to access certain style properties that may be affected by unloaded style sheets.

Render tree construction

While the DOM tree is being constructed, the browser constructs another tree, the render tree. This tree is of visual elements in the order in which they will be displayed. It is the visual representation of the document. The purpose of this tree is to enable painting the contents in their correct order.

Firefox calls the elements in the render tree "frames". WebKit uses the term renderer or render object.

A renderer knows how to lay out and paint itself and its children.

WebKit's RenderObject class, the base class of the renderers, has the following definition:
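Abridged, the definition is along these lines:

  class RenderObject {
      virtual void layout();
      virtual void paint(PaintInfo);
      virtual IntRect repaintRect();
      Node* node;                    // the DOM node
      RenderStyle* style;            // the computed style
      RenderLayer* containingLayer;  // the containing z-index layer
  };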

Each renderer represents a rectangular area usually corresponding to a node's CSS box, as described by the CSS2 spec. It includes geometric information like width, height and position.

The box type is affected by the "display" value of the style attribute that is relevant to the node (see the style computation section). Here is WebKit code for deciding what type of renderer should be created for a DOM node, according to the display attribute:
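A condensed sketch of that factory method:

  RenderObject* RenderObject::createObject(Node* node, RenderStyle* style)
  {
      RenderArena* arena = node->document()->renderArena();
      RenderObject* o = 0;
      switch (style->display()) {
          case NONE:
              break;  // display:none creates no renderer at all
          case INLINE:
              o = new (arena) RenderInline(node);
              break;
          case BLOCK:
          case INLINE_BLOCK:
              o = new (arena) RenderBlock(node);
              break;
          case LIST_ITEM:
              o = new (arena) RenderListItem(node);
              break;
          // ... further display values elided ...
      }
      return o;
  }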

The element type is also considered: for example, form controls and tables have special frames.

In WebKit, if an element wants to create a special renderer, it will override the createRenderer() method. The renderers point to style objects that contain non-geometric information.

The render tree relation to the DOM tree

The renderers correspond to DOM elements, but the relation is not one to one. Non-visual DOM elements won't be inserted in the render tree. An example is the "head" element. Also, elements whose display value is "none" won't appear in the tree (whereas elements with "hidden" visibility will).

There are DOM elements which correspond to several visual objects. These are usually elements with complex structure that cannot be described by a single rectangle. For example, the "select" element has three renderers: one for the display area, one for the drop down list box and one for the button. Also when text is broken into multiple lines because the width is not sufficient for one line, the new lines will be added as extra renderers.

Another example of multiple renderers is broken HTML. According to the CSS spec an inline element must contain either only block elements or only inline elements. In the case of mixed content, anonymous block renderers will be created to wrap the inline elements.

Some render objects correspond to a DOM node but not in the same place in the tree. Floats and absolutely positioned elements are out of flow: they are placed in a different part of the tree and mapped to the real frame, while a placeholder frame is put where they should have been.

The render tree and the corresponding DOM tree.

The flow of constructing the tree

In Firefox, the presentation is registered as a listener for DOM updates. The presentation delegates frame creation to the FrameConstructor, and the constructor resolves style (see style computation) and creates a frame.

In WebKit the process of resolving the style and creating a renderer is called "attachment". Every DOM node has an "attach" method. Attachment is synchronous: inserting a node into the DOM tree calls the new node's "attach" method.

Processing the html and body tags results in the construction of the render tree root. The root render object corresponds to what the CSS spec calls the containing block: the topmost block that contains all other blocks. Its dimensions are the viewport - the browser window's display area. Firefox calls it ViewPortFrame and WebKit calls it RenderView. This is the render object that the document points to. The rest of the tree is constructed as the DOM nodes are inserted.

See the CSS2 spec on the processing model .

Style computation

Building the render tree requires calculating the visual properties of each render object. This is done by calculating the style properties of each element.

The style includes style sheets of various origins, inline style elements and visual properties in the HTML (like the "bgcolor" property). The latter are translated to matching CSS style properties.

The origins of style sheets are the browser's default style sheets, the style sheets provided by the page author and user style sheets - these are style sheets provided by the browser user (browsers let you define your favorite styles. In Firefox, for instance, this is done by placing a style sheet in the "Firefox Profile" folder).

Style computation brings up a few difficulties:

  • Style data is a very large construct, holding numerous style properties; this can cause memory problems.

  • Finding the matching rules for each element can cause performance issues if not optimized. Traversing the entire rule list for each element to find matches is a heavy task. Selectors can have a complex structure that causes the matching process to start on a seemingly promising path that proves futile, so that another path has to be tried.

For example - this compound selector:
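That is, a rule of the form:

  div div div div {
    /* ... declarations ... */
  }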

This means the rule applies to a <div> that is a descendant of three divs. Suppose you want to check whether the rule applies to a given <div> element. You choose a certain path up the tree for checking. You may need to traverse the node tree upwards just to find out there are only two divs and the rule does not apply, and then try other paths in the tree.

  • Applying the rules involves quite complex cascade rules that define the hierarchy of the rules.

Let's see how the browsers face these issues:

Sharing style data

WebKit nodes reference style objects (RenderStyle). These objects can be shared by nodes under some conditions: the nodes must be siblings or cousins, and:

  • The elements must be in the same mouse state (e.g., one can't be in :hover while the other isn't)
  • Neither element should have an id
  • The tag names should match
  • The class attributes should match
  • The set of mapped attributes must be identical
  • The link states must match
  • The focus states must match
  • Neither element should be affected by attribute selectors, where affected is defined as having any selector match that uses an attribute selector in any position within the selector at all
  • There must be no inline style attribute on the elements
  • There must be no sibling selectors in use at all. WebCore simply throws a global switch when any sibling selector is encountered and disables style sharing for the entire document when they are present. This includes the + selector and selectors like :first-child and :last-child.

Firefox rule tree

Firefox has two extra trees for easier style computation: the rule tree and the style context tree. WebKit also has style objects, but they are not stored in a tree like the style context tree; only the DOM node points to its relevant style.

Firefox style context tree.

The style contexts contain end values. The values are computed by applying all the matching rules in the correct order and performing manipulations that transform them from logical to concrete values. For example, if the logical value is a percentage of the screen it will be calculated and transformed to absolute units. The rule tree idea is really clever. It enables sharing these values between nodes to avoid computing them again. This also saves space.

All the matched rules are stored in a tree. The bottom nodes in a path have higher priority. The tree contains all the paths for rule matches that were found. Storing the rules is done lazily. The tree isn't calculated at the beginning for every node, but whenever a node style needs to be computed the computed paths are added to the tree.

The idea is to see the tree paths as words in a lexicon. Let's say we have already computed this rule tree:

Computed rule tree

Suppose we need to match rules for another element in the content tree, and find out the matched rules (in the correct order) are B-E-I. We already have this path in the tree because we already computed path A-B-E-I-L. We will now have less work to do.

Let's see how the tree saves us work.

Division into structs

The style contexts are divided into structs. Those structs contain style information for a certain category, like border or color. All the properties in a struct are either inherited or non-inherited. Inherited properties are properties that, unless defined by the element, are inherited from the parent. Non-inherited properties (called "reset" properties) use default values if not defined.

The tree helps us by caching entire structs (containing the computed end values) in the tree. The idea is that if the bottom node didn't supply a definition for a struct, a cached struct in an upper node can be used.

Computing the style contexts using the rule tree

When computing the style context for a certain element, we first compute a path in the rule tree or use an existing one. We then begin to apply the rules in the path to fill the structs in our new style context. We start at the bottom node of the path - the one with the highest precedence (usually the most specific selector) and traverse the tree up until our struct is full. If there is no specification for the struct in that rule node, then we can greatly optimize - we go up the tree until we find a node that specifies it fully and point to it - that's the best optimization - the entire struct is shared. This saves computation of end values and memory.

If we find partial definitions we go up the tree until the struct is filled.

If we didn't find any definitions for our struct then, in case the struct is an "inherited" type, we point to the struct of our parent in the context tree. In this case we have also succeeded in sharing structs. If it is a reset struct then default values will be used.

If the most specific node does add values then we need to do some extra calculations for transforming it to actual values. We then cache the result in the tree node so it can be used by children.

If an element has a sibling that points to the same tree node, then the entire style context can be shared between them.

Let's see an example. Suppose we have this HTML:
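The example is along these lines:

  <html>
    <body>
      <div class="err" id="div1">
        <p>
          this is a <span class="big">big error</span>
          this is also a <span class="big">very big error</span>
        </p>
      </div>
      <div class="err" id="div2">another error</div>
    </body>
  </html>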

And the following rules:
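Numbered here so the walkthrough below can refer to them:

  1. div      { margin: 5px; color: black }
  2. .err     { color: red }
  3. .big     { margin-top: 3px }
  4. div span { margin-bottom: 4px }
  5. #div1    { color: blue }
  6. #div2    { color: green }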

To simplify things, let's say we need to fill out only two structs: the color struct and the margin struct. The color struct contains only one member, the color. The margin struct contains the four sides.

The resulting rule tree will look like this (the nodes are marked with the node name: the number of the rule they point at):

The rule tree

The context tree will look like this (node name: rule node they point to):

The context tree.

Suppose we parse the HTML and get to the second <div> tag. We need to create a style context for this node and fill its style structs.

We will match the rules and discover that the matching rules for the <div> are 1, 2 and 6. This means there is already an existing path in the tree that our element can use and we just need to add another node to it for rule 6 (node F in the rule tree).

We will create a style context and put it in the context tree. The new style context will point to node F in the rule tree.

We now need to fill the style structs. We will begin by filling out the margin struct. Since the last rule node (F) doesn't add to the margin struct, we can go up the tree until we find a cached struct computed in a previous node insertion and use it. We will find it on node B, which is the uppermost node that specified margin rules.

We do have a definition for the color struct, so we can't use a cached struct. Since color has one attribute we don't need to go up the tree to fill other attributes. We will compute the end value (convert string to RGB etc) and cache the computed struct on this node.

The work on the second <span> element is even easier. We will match the rules and come to the conclusion that it points to node G, like the previous span. Since we have siblings that point to the same node, we can share the entire style context and just point to the context of the previous span.

For structs that contain rules that are inherited from the parent, caching is done on the context tree (the color property is actually inherited, but Firefox treats it as reset and caches it on the rule tree).

For example, if we added rules for fonts in a paragraph:
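Say, a rule along these lines:

  p { font-family: Verdana; font-size: 10px; font-weight: bold }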

Then the paragraph element, which is a child of the div in the context tree, could share the same font struct as its parent, provided no font rules were specified for the paragraph.

In WebKit, which does not have a rule tree, the matched declarations are traversed four times. First, non-important high-priority properties are applied (properties that should be applied first because others depend on them, such as display), then high-priority important ones, then normal-priority non-important ones, then normal-priority important rules. This means that properties that appear multiple times are resolved according to the correct cascade order: the last wins.

So to summarize: sharing the style objects (entirely or some of the structs inside them) solves issues 1 and 3. The Firefox rule tree also helps in applying the properties in the correct order.

Manipulating the rules for an easy match

There are several sources for style rules:

  • CSS rules, either in external style sheets or in style elements: p { color: blue }
  • Inline style attributes: <p style="color: blue" />
  • HTML visual attributes (which are mapped to relevant style rules): <p bgcolor="blue" />

The last two are easily matched to the element, since it owns the style attributes, and HTML attributes can be mapped using the element as the key.

As noted previously in issue #2, the CSS rule matching can be trickier. To solve the difficulty, the rules are manipulated for easier access.

After parsing the style sheet, the rules are added to one of several hash maps, according to the selector. There are maps by id, by class name, by tag name and a general map for anything that doesn't fit into those categories. If the selector is an id, the rule will be added to the id map, if it's a class it will be added to the class map etc.

This manipulation makes it much easier to match rules. There is no need to look in every declaration: we can extract the relevant rules for an element from the maps. This optimization eliminates 95+% of the rules, so that they need not even be considered during the matching process (4.1).

Let's see for example the following style rules:
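Three illustrative rules, one for each kind of map:

  p.error     { color: red }
  #messageDiv { height: 50px }
  div         { margin: 5px }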

The first rule will be inserted into the class map. The second into the id map and the third into the tag map.

For the following HTML fragment:
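Illustrative markup:

  <p class="error">an error occurred</p>
  <div id="messageDiv">this is a message</div>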

We will first try to find rules for the p element. The class map will contain an "error" key under which the rule for "p.error" is found. The div element will have relevant rules in the id map (the key is the id) and the tag map. So the only work left is finding out which of the rules that were extracted by the keys really match.

For example if the rule for the div was:
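Say the rule had been:

  table div { margin: 5px }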

It would still be extracted from the tag map, because the key is the rightmost selector, but it would not match our div element, which does not have a table ancestor.

Both WebKit and Firefox do this manipulation.

Style sheet cascade order

The style object has properties corresponding to every visual attribute (all CSS attributes, but more generic). If a property is not defined by any of the matched rules, some properties can be inherited from the parent element's style object; other properties have default values.

The problem begins when there is more than one definition - here the cascade order comes in to solve the issue.

A declaration for a style property can appear in several style sheets, and several times inside a style sheet. This means the order of applying the rules is very important. This is called the "cascade" order. According to CSS2 spec, the cascade order is (from low to high):

  • Browser declarations
  • User normal declarations
  • Author normal declarations
  • Author important declarations
  • User important declarations

The browser declarations are least important, and the user overrides the author only if the declaration is marked as important. Declarations with the same order are sorted by specificity and then by the order in which they are specified. The HTML visual attributes are translated to matching CSS declarations. They are treated as author rules with low priority.

Specificity

The selector specificity is defined by the CSS2 specification as follows:

  • count 1 if the declaration it is from is a 'style' attribute rather than a rule with a selector, 0 otherwise (= a)
  • count the number of ID attributes in the selector (= b)
  • count the number of other attributes and pseudo-classes in the selector (= c)
  • count the number of element names and pseudo-elements in the selector (= d)

Concatenating the four numbers a-b-c-d (in a number system with a large base) gives the specificity.

The number base you need to use is defined by the highest count you have in one of the categories.

For example, if a=14 you can use hexadecimal. In the unlikely case where a=17 you would need a base-17 number system. The latter situation can happen with a selector like html body div div p… (17 tags in your selector - not very likely).

Some examples:
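The CSS2 specification illustrates with examples like these:

  *              {}  /* a=0 b=0 c=0 d=0 -> specificity = 0,0,0,0 */
  li             {}  /* a=0 b=0 c=0 d=1 -> specificity = 0,0,0,1 */
  li:first-line  {}  /* a=0 b=0 c=0 d=2 -> specificity = 0,0,0,2 */
  ul li          {}  /* a=0 b=0 c=0 d=2 -> specificity = 0,0,0,2 */
  ul ol+li       {}  /* a=0 b=0 c=0 d=3 -> specificity = 0,0,0,3 */
  h1 + *[rel=up] {}  /* a=0 b=0 c=1 d=1 -> specificity = 0,0,1,1 */
  li.red.level   {}  /* a=0 b=0 c=2 d=1 -> specificity = 0,0,2,1 */
  #x34y          {}  /* a=0 b=1 c=0 d=0 -> specificity = 0,1,0,0 */
  style=""           /* a=1 b=0 c=0 d=0 -> specificity = 1,0,0,0 */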

Sorting the rules

After the rules are matched, they are sorted according to the cascade rules. WebKit uses bubble sort for small lists and merge sort for big ones. WebKit implements sorting by overriding the > operator for the rules:
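Roughly, the comparison looks like this:

  static bool operator >(CSSRuleData& r1, CSSRuleData& r2)
  {
      int spec1 = r1.selector()->specificity();
      int spec2 = r2.selector()->specificity();
      // equal specificity: the rule that appears later in the source wins
      return (spec1 == spec2) ? r1.position() > r2.position() : spec1 > spec2;
  }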

Gradual process

WebKit uses a flag that marks whether all top-level style sheets (including @imports) have been loaded. If the style is not fully loaded when attaching, placeholders are used and marked in the document; they will be recalculated once the style sheets are loaded.

Layout

When the renderer is created and added to the tree, it does not have a position and size. Calculating these values is called layout or reflow.

HTML uses a flow based layout model, meaning that most of the time it is possible to compute the geometry in a single pass. Elements later "in the flow" typically don't affect the geometry of elements that are earlier "in the flow", so layout can proceed left-to-right, top-to-bottom through the document. There are exceptions: for example, HTML tables may require more than one pass.

The coordinate system is relative to the root frame. Top and left coordinates are used.

Layout is a recursive process. It begins at the root renderer, which corresponds to the <html> element of the HTML document. Layout continues recursively through some or all of the frame hierarchy, computing geometric information for each renderer that requires it.

The position of the root renderer is 0,0 and its dimensions are the viewport - the visible part of the browser window.

All renderers have a "layout" or "reflow" method; each renderer invokes the layout method of those of its children that need layout.

Dirty bit system

In order not to do a full layout for every small change, browsers use a "dirty bit" system. A renderer that is changed or added marks itself and its children as "dirty": needing layout.

There are two flags: "dirty", and "children are dirty" which means that although the renderer itself may be OK, it has at least one child that needs a layout.

Global and incremental layout

Layout can be triggered on the entire render tree - this is "global" layout. This can happen as a result of:

  • A global style change that affects all renderers, like a font size change.
  • A screen resize

Layout can be incremental, only the dirty renderers will be laid out (this can cause some damage which will require extra layouts).

Incremental layout is triggered (asynchronously) when renderers are dirty. For example when new renderers are appended to the render tree after extra content came from the network and was added to the DOM tree.

Incremental layout.

Asynchronous and synchronous layout

Incremental layout is done asynchronously. Firefox queues "reflow commands" for incremental layouts and a scheduler triggers batch execution of these commands. WebKit also has a timer that executes an incremental layout: the tree is traversed and "dirty" renderers are laid out.

Scripts asking for style information, like "offsetHeight", can trigger incremental layout synchronously.

Global layout will usually be triggered synchronously.

Sometimes layout is triggered as a callback after an initial layout because some attribute, such as the scrolling position, changed.

Optimizations

When a layout is triggered by a "resize" or by a change in the renderer's position (and not its size), the renderer's sizes are taken from a cache and not recalculated.

In some cases only a sub tree is modified and layout does not start from the root. This can happen in cases where the change is local and does not affect its surroundings - like text inserted into text fields (otherwise every keystroke would trigger a layout starting from the root).

The layout process

The layout usually has the following pattern:

  • The parent renderer determines its own width.
  • The parent goes over its children and places each child renderer (sets its x and y).
  • It calls the child's layout if needed (the child is dirty, we are in a global layout, or for some other reason), which calculates the child's height.
  • The parent uses the children's accumulated heights and the heights of margins and padding to set its own height; this will be used by the parent renderer's parent.
  • Finally, it sets its dirty bit to false.

Firefox uses a "state" object (nsHTMLReflowState) as a parameter to layout (termed "reflow"). Among other things, the state includes the parent's width.

The output of the Firefox layout is a "metrics" object (nsHTMLReflowMetrics). It contains the renderer's computed height.

Width calculation

The renderer's width is calculated using the container block's width, the renderer's style "width" property, and the margins and borders.

For example the width of the following div:
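Take a div such as:

  <div style="width: 30%"/>

If the container's content width works out to, say, 1000px, the element's width resolves to 300px before horizontal borders and padding are added.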

Would be calculated by WebKit as follows (class RenderBox, method calcWidth):

  • The container width is the maximum of the container's availableWidth and 0. The availableWidth in this case is the contentWidth, which is calculated as:
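In WebKit terms this is, roughly:

  clientWidth() - paddingLeft() - paddingRight()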

clientWidth and clientHeight represent the interior of an object excluding border and scrollbar.

The element's width is the "width" style attribute. It is calculated as an absolute value by computing the percentage of the container width.

The horizontal borders and paddings are now added.

So far this was the calculation of the "preferred width". Now the minimum and maximum widths will be calculated.

If the preferred width is greater than the maximum width, the maximum width is used. If it is less than the minimum width (the smallest unbreakable unit) then the minimum width is used.

The values are cached, in case a layout is needed again but the width does not change.

Line breaking

When a renderer in the middle of a layout decides that it needs to break, it stops and propagates to its parent that it needs to be broken. The parent creates the extra renderers and calls layout on them.

Painting

In the painting stage, the render tree is traversed and the renderer's "paint()" method is called to display content on the screen. Painting uses the UI infrastructure component.

Global and incremental

Like layout, painting can also be global - the entire tree is painted - or incremental. In incremental painting, some of the renderers change in a way that does not affect the entire tree. The changed renderer invalidates its rectangle on the screen. This causes the OS to see it as a "dirty region" and generate a "paint" event. The OS does this cleverly and coalesces several regions into one. In Chrome it is more complicated because the renderer is in a different process than the main process. Chrome simulates the OS behavior to some extent. The presentation listens to these events and delegates the message to the render root. The tree is traversed until the relevant renderer is reached. It will repaint itself (and usually its children).

The painting order

CSS2 defines the order of the painting process. This is actually the order in which the elements are stacked in the stacking contexts. This order affects painting since the stacks are painted from back to front. The stacking order of a block renderer is:

  • background color
  • background image
  • border
  • children
  • outline

Firefox display list

Firefox goes over the render tree and builds a display list for the painted rectangle. It contains the renderers relevant to the rectangle, in the right painting order (backgrounds of the renderers, then borders, etc.).

That way the tree needs to be traversed only once for a repaint, instead of several times - painting all backgrounds, then all images, then all borders, etc.

Firefox optimizes the process by not adding elements that will be hidden, like elements completely beneath other opaque elements.

WebKit rectangle storage

Before repainting, WebKit saves the old rectangle as a bitmap. It then paints only the delta between the new and old rectangles.

Dynamic changes

The browsers try to do the minimal possible actions in response to a change. So changes to an element's color will cause only repaint of the element. Changes to the element position will cause layout and repaint of the element, its children and possibly siblings. Adding a DOM node will cause layout and repaint of the node. Major changes, like increasing font size of the "html" element, will cause invalidation of caches, relayout and repaint of the entire tree.

The rendering engine's threads

The rendering engine is single threaded. Almost everything, except network operations, happens in a single thread. In Firefox and Safari this is the main thread of the browser. In Chrome it's the tab process main thread.

Network operations can be performed by several parallel threads. The number of parallel connections is limited (usually 2 - 6 connections).

The browser main thread is an event loop. It's an infinite loop that keeps the process alive. It waits for events (like layout and paint events) and processes them. This is Firefox code for the main event loop:
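Abridged, it boils down to:

  while (!mExiting)
      NS_ProcessNextEvent(thread);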

CSS2 visual model

According to the CSS2 specification , the term canvas describes "the space where the formatting structure is rendered": where the browser paints the content.

The canvas is infinite for each dimension of the space but browsers choose an initial width based on the dimensions of the viewport.

According to www.w3.org/TR/CSS2/zindex.html , the canvas is transparent if contained within another, and given a browser defined color if it is not.

CSS Box model

The CSS box model describes the rectangular boxes that are generated for elements in the document tree and laid out according to the visual formatting model.

Each box has a content area (e.g. text, an image, etc.) and optional surrounding padding, border, and margin areas.

CSS2 box model

Each node generates 0…n such boxes.

All elements have a "display" property that determines the type of box that will be generated.

The default is inline but the browser style sheet may set other defaults. For example: the default display for the "div" element is block.

You can find a default style sheet example here: www.w3.org/TR/CSS2/sample.html .

Positioning scheme

There are three schemes:

  • Normal: the object is positioned according to its place in the document. This means its place in the render tree is like its place in the DOM tree and laid out according to its box type and dimensions
  • Float: the object is first laid out like normal flow, then moved as far left or right as possible
  • Absolute: the object is put in the render tree in a different place than in the DOM tree

The positioning scheme is set by the "position" property and the "float" attribute.

  • static and relative cause a normal flow
  • absolute and fixed cause absolute positioning

In static positioning no position is defined and the default positioning is used. In the other schemes, the author specifies the position: top, bottom, left, right.

The way the box is laid out is determined by:

  • Box type
  • Box dimensions
  • Positioning scheme
  • External information such as image size and the size of the screen

Block box: forms a block - has its own rectangle in the browser window.

Block box.

Inline box: does not have its own block, but is inside a containing block.

Inline boxes.

Blocks are formatted vertically one after the other. Inlines are formatted horizontally.

Block and Inline formatting.

Inline boxes are put inside lines, or "line boxes". The lines are at least as tall as the tallest box but can be taller, when the boxes are aligned by "baseline" - meaning the bottom part of an element is aligned at a point of another box other than its bottom. If the container width is not enough, the inlines will be put on several lines. This is usually what happens in a paragraph.

Lines.

Positioning

Relative positioning - positioned as usual and then moved by the required delta.

Relative positioning.

A float box is shifted to the left or right of a line. The interesting feature is that the other boxes flow around it. The HTML:
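Markup along these lines:

  <p>
    <img style="float: right" src="images/image.jpg" width="100" height="100" />
    Lorem ipsum dolor sit amet, consectetuer adipiscing elit...
  </p>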

Will look like:

Float.

Absolute and fixed

The layout is defined exactly regardless of the normal flow. The element does not participate in the normal flow. The dimensions are relative to the container. In fixed, the container is the viewport.

Fixed positioning.

Layered representation

This is specified by the z-index CSS property. It represents the third dimension of the box: its position along the "z axis".

The boxes are divided into stacks (called stacking contexts). In each stack the back elements are painted first and the forward elements on top, closer to the user. In case of overlap the foremost element hides the elements beneath it.

The stacks are ordered according to the z-index property. Boxes with "z-index" property form a local stack. The viewport has the outer stack.
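As an illustration, consider two absolutely positioned divs whose z-index values invert their source order (illustrative markup):

  <style type="text/css">
    div { position: absolute; left: 2in; top: 2in; }
  </style>

  <div style="z-index: 3; background-color: red;   width: 1in; height: 1in;"></div>
  <div style="z-index: 1; background-color: green; width: 2in; height: 2in;"></div>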

The result will be this:

Stacking order.

Although the red div precedes the green one in the markup, and would have been painted first in the regular flow, its z-index property is higher, so it sits further forward in the stack held by the root box.

Browser architecture

  • Grosskurth, Alan. A Reference Architecture for Web Browsers (pdf)
  • Gupta, Vineet. How Browsers Work - Part 1 - Architecture
  • Aho, Sethi, Ullman, Compilers: Principles, Techniques, and Tools (aka the "Dragon book"), Addison-Wesley, 1986
  • Rick Jelliffe. The Bold and the Beautiful: two new drafts for HTML 5.
  • L. David Baron, Faster HTML and CSS: Layout Engine Internals for Web Developers.
  • L. David Baron, Faster HTML and CSS: Layout Engine Internals for Web Developers (Google tech talk video)
  • L. David Baron, Mozilla's Layout Engine
  • L. David Baron, Mozilla Style System Documentation
  • Chris Waterson, Notes on HTML Reflow
  • Chris Waterson, Gecko Overview
  • Alexander Larsson, The life of an HTML HTTP request
  • David Hyatt, Implementing CSS(part 1)
  • David Hyatt, An Overview of WebCore
  • David Hyatt, WebCore Rendering
  • David Hyatt, The FOUC Problem

W3C Specifications

  • HTML 4.01 Specification
  • W3C HTML5 Specification
  • Cascading Style Sheets Level 2 Revision 1 (CSS 2.1) Specification

Browsers build instructions

  • Firefox. https://developer.mozilla.org/Build_Documentation
  • WebKit. http://webkit.org/building/build.html



Whenever we need information, we usually turn to the Internet: we search for all kinds of things in our daily lives, on mobile phones, computers, and tablets, and get information from all over the world. But a connection alone is not enough - we need an application through which to ask our questions and view the answers. The platform that provides this service is called a web browser; without one, the internet could not deliver information to us.

What is a Web Browser?

A web browser is application software for exploring the World Wide Web (WWW). It provides an interface between the client and the server, requesting web documents and services from the server on the user's behalf. It interprets and renders the HTML used to design web pages. Whenever we search for anything on the internet, the browser loads a web page written in HTML, including text, links, images, and other items such as style sheets and JavaScript functions. Google Chrome, Microsoft Edge, Mozilla Firefox, and Safari are examples of web browsers.

History of the Web Browsers

The first web browser, WorldWideWeb, was created in 1990 by Tim Berners-Lee; it was later renamed Nexus. In 1993, Marc Andreessen and his team released Mosaic, the first browser to display text and images together on the screen. He went on to co-found Netscape, which shipped the Netscape Navigator browser in 1994. The next year, Microsoft launched Internet Explorer, which came pre-installed with the Windows operating system. Many more browsers followed, each with its own features: Mozilla Firefox, Google Chrome, Safari, Opera, and others. For more detail, refer to this article: History of Web Browsers


How does a Web Browser Work?

A web browser helps us find information anywhere on the internet. It is installed on the client computer and requests information from the web server; this type of working model is called the client-server model.


Client-server model

The browser receives information through the HTTP protocol, which defines how data is transmitted. When the browser receives data from the server, it renders the HTML into a user-readable form and displays the information on the device screen.
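For illustration, a simplified request and response might look like this (www.example.com is a placeholder host):

  GET /index.html HTTP/1.1
  Host: www.example.com

  HTTP/1.1 200 OK
  Content-Type: text/html

  <html> ...the page the browser renders... </html>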

Website Cookies

When we visit a website, our web browser stores small files of information about us called cookies. Cookies are designed to remember stateful information about our browsing history. Other cookies remember things like our interests and browsing patterns; websites use them to show us ads based on those interests.

Some Popular Web Browsers

Here is a list of 7 popular web browsers:

1. Google Chrome:

Developed by Google, Chrome is one of the most widely used web browsers in the world, known for its speed and simplicity.

2. Mozilla Firefox:

Developed by the Mozilla Foundation, Firefox is an open-source browser that is known for its privacy features and customization options.

3. Apple Safari:

Developed by Apple, Safari is the default browser on Mac and iOS devices and is known for its speed and integration with other Apple products.

4. Microsoft Edge:

Developed by Microsoft, Edge is the default browser on Windows 10 and is known for its integration with other Microsoft products and services.

5. Tor Browser:

Developed by The Tor Project, Tor Browser is a web browser that is designed for anonymous web browsing and is based on Mozilla Firefox.

6. Opera:

Developed by Opera Software, Opera is a web browser that is known for its speed and built-in VPN feature.

7. Brave:

Developed by Brave Software, Brave is a web browser that is focused on privacy and security and blocks third-party ads and trackers by default.

These are some of the most popular web browsers; others are available as well, such as Vivaldi and Waterfox. The choice of a web browser depends on the user's preferences and requirements.





Advantages and Disadvantages of Internet Essay

500+ Words Essay on the Advantages and Disadvantages of the Internet

The internet plays a significant role in people's lives today. It is a valuable source of information that helps people share knowledge and communicate with anyone, anywhere, over an internet connection. But along with its many advantages, the internet also has disadvantages. This essay on the 'Advantages and Disadvantages of the Internet' sheds light on both aspects.

Advantages of the Internet

The role of the internet in the modern world cannot be understated. Nowadays, almost everyone uses the internet for daily tasks. People in offices, schools, colleges, hospitals, and other fields use electronic devices such as laptops, computers, tablets, and cell phones to make their work simpler and faster. The internet has also made access to information easier: we can learn about practically anything with a single click. And we can easily communicate and share information with people around the world via email, instant messaging, video calls, and more.

The internet delivers a wide variety of advantages. It not only enables people to share information but also serves as a place to store information and media digitally. This feature has benefitted the fields of education and research the most. We have also seen a boom in e-commerce businesses, which use the internet to provide a seamless experience for buying and selling products online. This has created a large market for online retailers and integrated different fields of business. People can now purchase almost everything they need and have it delivered to their doorstep within days. Many services are likewise provided over the internet, such as online booking, banking, and hotel reservations.

The internet has made everything more accessible and quick. Most organisations around the world advertise their vacancies on the internet, so people can search for jobs across the globe. The internet also provides many kinds of entertainment: music, movies, theatre, live matches, and live broadcasts. It likewise helps students continue their learning through online education.

It is difficult to name all of the benefits and advantages of the internet. This is because the internet has become so entangled and integrated into our daily lives that it has an influence on everything we experience around us.

Disadvantages of the Internet

Although the internet has many advantages, it also has some disadvantages. In the next section of the advantages and disadvantages of the internet essay, let us discuss the disadvantages and the possible risks associated with the modern-day applications of the internet.

While the internet brings the tools, products, and services we need right to our doorstep, it also isolates us from the world outside. As we grow accustomed to ordering everything online - clothes, food, drinks, groceries, commodities, even bill payments - we leave the house less often. This has caused physical problems as well as mental health issues such as social anxiety, insomnia, and even depression. Teenagers and kids are the most influenced, as theirs is the generation that has grown up with the internet's pervasive use, moulded to a life dependent on it. This hinders their learning capabilities and real-life problem-solving skills, because they are accustomed to using their mobile for every task.

Today, the internet is also one of the most common sources of viruses on electronic devices. As we perform various activities online, we expose ourselves to threats such as malicious software, through which confidential data may be accessed by unauthorised people or hackers. Some websites contain immoral material in the form of text, pictures, or videos, which damages the character of the young generation, especially kids and teenagers. A lot of time can also be wasted: many people become addicted to spending time on the internet, chatting with friends or playing games. And because so much information about any given topic is stored on websites, some of it incorrect or inauthentic, it can be difficult to select what is correct.

From the points covered in this essay, it can be said that the benefits of the internet outweigh its disadvantages and threats, provided government guidelines for safe browsing are followed and appropriate measures are taken. The responsibility to stay safe falls on users themselves: one needs to stay vigilant and perform regular security checks on one's network and computing devices to keep them secure from online attacks.



Research Article

Quantifying the web browser ecosystem

Sela Ferdman - Department of Computer Science, University of Haifa, Haifa, Israel
Einat Minkov - Department of Information Systems, University of Haifa, Haifa, Israel
Ron Bekkerman - Department of Information and Knowledge Management, University of Haifa, Haifa, Israel
David Gefen - LeBow College of Business, Drexel University, Philadelphia, PA, United States of America

* E-mail: [email protected]

Published: June 23, 2017
https://doi.org/10.1371/journal.pone.0179281

Contrary to the assumption that web browsers are designed to support the user, an examination of 900,000 distinct PCs shows that web browsers comprise a complex ecosystem with millions of addons collaborating and competing with each other. It is possible for addons to “sneak in” through third party installations or to get “kicked out” by their competitors without user involvement. This study examines that ecosystem quantitatively by constructing a large-scale graph with nodes corresponding to users, addons, and words (terms) that describe addon functionality. Analyzing addon interactions at user level using the Personalized PageRank (PPR) random walk measure shows that the graph demonstrates ecological resilience. Adapting the PPR model to analyzing the browser ecosystem at the level of addon manufacturer, the study shows that some addon companies are in symbiosis and others clash with each other, as shown by analyzing the behavior of 18 prominent addon manufacturers. Results may offer insight into how other evolving internet ecosystems behave, and suggest a methodology for measuring this behavior. Specifically, applying such a methodology could transform the addon market.

Citation: Ferdman S, Minkov E, Bekkerman R, Gefen D (2017) Quantifying the web browser ecosystem. PLoS ONE 12(6): e0179281. https://doi.org/10.1371/journal.pone.0179281

Editor: Hussein Suleman, University of Cape Town, SOUTH AFRICA

Received: December 15, 2016; Accepted: May 11, 2017; Published: June 23, 2017

Copyright: © 2017 Ferdman et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are hosted at figshare at the following URL: https://doi.org/10.6084/m9.figshare.5063332.v1 .

Funding: The authors received no specific funding for this work.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Web browsers have become a major component of routine human-computer interaction, with some operating systems based entirely on browsers (e.g., ChromeOS by Google [ 1 ]). Browser extensions, also known as addons, are computer programs that (as the name suggests) extend, improve, and personalize browser capabilities. More than 750 million addons had been downloaded and installed by Google Chrome browser users as of June 2012 [ 2 ]. Examples of addons include an extension that allows visually impaired users to access the content of bar charts on the Web [ 3 ], and an extension that addresses users’ security concerns by seamlessly producing a unique password for each website the user accesses [ 4 ].

Internet software companies are very interested in installing their addons, and particularly toolbars, on users’ machines. Toolbars are GUI widgets that typically reside in the upper part of the browser’s window, extending the browser’s functionality. Toolbars can collect information about the browsing history of the user (e.g., Yahoo! Toolbar [ 5 ]) and can redirect user search activity to a specific search portal (e.g., MyWebSearch.com). Crucially, the company that owns the search portal, and typically also the toolbar, receives payments from ad providers per user click on the ads it displays (primary ad providers are Google and Yahoo!). This revenue generation model is used extensively by software companies that distribute freeware products [ 6 ]. For example, 45% of AVG Antivirus Technologies sales in 2012 were generated by its browser toolbar [ 7 ]. It was estimated that Google, the biggest Web advertising firm, might have lost $1.3 billion in revenue in 2013 because of changes to its policy with respect to toolbars and a resulting shift of some addon distributors to Google’s competitors [ 8 , 9 ].

Consequently, addons compete with each other over resources (such as battery, memory, disk space, and computing power) and user attention. Regardless of how intelligent they are, they may be aware of each other and may “piggyback” on each other or uninstall each other. Addon behavior within the Web browser is characterized by addons making their own decisions independently and often unbeknown to the user, which comprises a complex ecosystem with the user being just one of the participants. A key issue in understanding that ecosystem, responding to it, regulating it, and transforming it into a mature market is the current inability to show that it is inherently stable and measurable. This study addresses that issue.

More broadly, the Web browser ecosystem is characteristic of the types of systems discussed in the seminal paper by Russell et al. [ 10 ], which poses core questions about the legal, ethical, and structural regulation of decisions made by intelligent systems composed of both human and machine decision making. Past research in this arena looked into Human-Computer Interaction (e.g., [ 11 ]), mostly concerned with how one human communicates with one machine, or how humans communicate with each other with the help of machines. Likewise, Multi-Agent Systems research (e.g., [ 12 ]) deals with cooperation between machines, while largely ignoring environments in which machines do not cooperate with each other, are not designed to do so, or are unaware of each other. In contrast, this paper deals with the wider ecosystem in which machines both compete and collaborate with each other.

Addressing such a dynamic ecosystem, this paper shows the applicability of a Personalized PageRank (PPR) random walk over the heterogeneous graph of users, addons, and addon description terms to quantify the Web browser ecosystem. This could be a first step toward monitoring and regulating independent machine behavior. An example of independent machine behavior within the addon ecosystem is an antivirus tool being installed on a laptop: what should it do about another antivirus tool that was preinstalled on that laptop? Such questions are becoming more pertinent in the context of addons because, while browser extensions can be installed proactively, they are often “silently” installed on one’s machine by a third party, typically as the user downloads some other program or installs a “software bundle”. The questions addressed by this research are of both theoretical significance and considerable economic impact.

Research questions and their addon ecosystem context

The Web browser ecosystem is a complex, evolving one. Addons are installed and uninstalled on user machines. New addons introduced by software companies become prevalent or fade over time. New addon companies enter, and older players gain or lose power. Companies establish partnerships or compete with each other (and sometimes both), to mention but a few of its dynamic characteristics. These developments occur solely within the digital medium (addons being software executables), with each addon having a lifecycle of events and a spectrum of interactions with its environment. All this happens on a daily basis and is mostly hidden from the user, who may not even be aware of the vibrant “life” on his/her Web browser.

Addons are in a symbiotic relationship when at least one of them benefits from the other. For example, an addon may get installed on a user’s machine during (or following) the installation of another addon. This is a direct benefit to the former addon, as it would not have reached the machine had the other addon not been installed on it. Often, addons of the same company are installed in a bundle. In some cases, addon companies may even have a distribution agreement such that one company provides the means for installing the other company’s addons. Clashes occur when an addon “kicks out” other addons. There are a variety of reasons for a clash. A clash may happen, for example, when one company’s addon removes another company’s addon because the two companies’ products directly compete with each other. Of course, the user (i.e., the computer owner) also plays an important role in the addon ecosystem: some users “hunt down” and remove addons that occasionally appear in the computer’s browser; other users are more tolerant, letting addons live in the browser for a long time and not minding more addons being installed over time.

All these processes occur in the Web browser habitat . This habitat is observably ecologically stable (browsers do not crash frequently) and shows resilience : if not disturbed, the habitat will remain approximately the same, and if disturbed from outside then it will “remember” its stable state and try to recover.

Addressing the research objective of quantifying independent machine behavior in the context of addon ecosystems, the first research question aims to establish that the Web browser addon habitat can be verified as resilient. Building on that verification, the next research questions address the symbiosis and clash characteristics of that habitat.

RQ1: Can Web browser habitat resilience be verified?

RQ2: Can the degree of addon symbiosis and clash be measured?

The research questions are addressed by analyzing records of user-addon associations collected from anonymous users all over the world. The original data consisted of the list of addons detected per user, including their textual descriptions and installation paths. The data were cast into a relational graph in which typed nodes correspond to distinct user, addon, and term objects. In this representation, a habitat observed on an individual user’s machine forms a star-shaped sub-graph in which a node corresponding to the user is linked to nodes corresponding to the addons that reside on that user’s machine. Those addon nodes may be further linked to lexical terms derived from their textual descriptions. Multiple habitats are connected in the joint graph; for example, each addon is directly connected to all the users that have it installed. The graph representation is compact, supporting efficient processing of large-scale data. Importantly, graph-theoretic methods can be employed to assess structural inter-node relatedness.

The ability to verify habitat resilience (RQ1) is measured by showing that if a random addon is removed from a habitat then, given the identity of the remaining addons in that habitat and the inter-habitat relationships registered in the relational graph, the missing addon can be identified. The significance of being able to do so is shown by verifying that a Personalized PageRank (PPR) random walk performs better than a “one-fits-all” method such as ranking by popularity. Given that habitat resilience can be verified, RQ2 then shows that two defining characteristics of a habitat, symbiosis and clash, can also be measured by assessing the relationships among addon companies. A graph-based measure of relative importance is employed for this purpose. The results suggest the possibility of monitoring and regulating independent machine behavior. We claim that PPR may be a candidate algorithm for doing so, and show its ability to detect business alliances and rivalries in digital media.

Related research

Transferring insight from biology to computer science is a topic of ongoing research [ 13 ]. Examples include the popular analogy of malicious software to viruses [ 14 ], the study of epidemic propagation in networks [ 15 ], the comparison of information dissemination on social networks to an evolutionary process [ 16 ], and more. This study follows in the footsteps of previous research that outlined an analogy between biological ecosystems and the collective behavior of players, or processes, in the software industry. While that literature, discussed next, is theoretical and anecdotal, this study reports empirical results using real-world data that show characteristics of software ecosystems arguably similar to those of biological ecosystems. The next sections define ecosystems in the context of previous research, and then survey research related to the methodology used in this study.

Business and software ecosystems

It has long been suggested that companies should not be viewed as individual entities, but rather as part of a business ecosystem [ 17 , 18 ]. Applying this paradigm, companies might be thought of as corresponding to species in a biological ecosystem. Like its biological counterpart, a business ecosystem is assumed to gradually develop from a collection of elements to a structured community, and, likewise, each member of a business ecosystem ultimately shares the fate of the network as a whole, regardless of its relative strength.

To put this study into perspective, we overview recent research focused on software ecosystems [ 19 – 22 ], studying the complex relationships among companies in the software industry. Manikas and Hansen [ 21 ] defined a software ecosystem as the interaction of a set of actors on top of a common technological platform that results in a number of software solutions or services. As an example, they considered the iOS ecosystem, in which Apple provides a platform for selling applications in return for a yearly fee and 30% of application sale revenues. According to Manikas and Hansen, software ecosystems are characterized by a wide spectrum of symbiotic relationships: two actors might have mutual benefits, be in direct competition (antagonism), be unaffected (neutralism), or be in a position where one company is unaffected while the other is benefiting (amensalism) or harmed (parasitism) by their relationship. Manikas and Hansen noted that little research had been done in the context of real-world ecosystems. Other researchers used the term “software ecosystems” to describe more technical aspects concerning the development of software systems that involve multiple players and must adapt to new environments or requirements [ 23 , 24 ].

To the best of our knowledge, the current work is the first to study interactions between players in the Web browser addons domain. This Web browser ecosystem differs from the organization-centric software ecosystems previously studied in the literature (e.g., [ 25 ]), where an organization develops a software ecosystem around its offering, as in the case of Salesforce, which created a marketplace of third-party extensions to its products [ 26 ]. In the Web browser ecosystem there is no organization that can regulate addon behavior. Moreover, browser addons can interact directly with each other, even removing each other from the user’s machine, which is not allowed in the regulated ecosystem of an organization. Jansen and Cusumano [ 26 ] found that a significant difference between software and ecological ecosystems is that software species can “consciously” decide to exit the ecosystem, as opposed to species in a biological ecosystem. That distinction, however, may not readily apply to the browser addon ecosystem because addons do not leave the system of their own will: once installed, only external factors limit their survival.

The resilience of a biological ecosystem is defined as the amount of disturbance it can withstand without changing its self-organized processes and structures [ 27 ], or as the time required for the ecosystem to return to its stable state after a perturbation [ 28 ]. Dhungana et al. [ 20 ] define a sustainable software ecosystem as one that can survive significant habitat changes coming from competitors. Along the same lines, this study defines ecological resilience as the ability of a Web browser ecosystem that is artificially disturbed by extracting an existing addon to “remember” its original state to the extent that the missing addon can be predicted. Such ecological memory is a main component of ecological resilience, playing a major role in the reorganization of ecosystems [ 29 ]. Ecological memory includes the biological legacies within a habitat and the genetic composition of populations. As described by Schaefer [ 30 ], ecological memory is encapsulated in soil properties, spores, seeds, stem fragments, species, populations, and other remnants that influence the composition of the replacement ecosystem and may also support ecological restoration. In particular, an internal component of ecological memory consists of remnants of species in the immediate area, and an external component consists of the surrounding areas. In the addon ecosystem studied in this research, the internal component corresponds to the addons installed at the habitat of an individual user, and the external component to the objects that directly connect with the user’s environment in the global graph.

Graph-based data representation

The definitions above imply that an ecosystem can be represented as a set of objects that interact in various ways among themselves, and possibly with other environmental objects. Such a relational schema is naturally represented using a heterogeneous typed graph in which nodes denote entities and edges denote inter-entity relationships [ 31 , 32 ]. A plethora of well-studied and efficient methods exists that can identify global phenomena in such a graph and evaluate the relatedness between remote entity pairs [ 33 , 34 ]. Nonetheless, only a few studies have analyzed ecosystems using graph-based quantifiable measures. One such study was conducted by Blincoe et al. [ 24 ], who aimed at identifying ecosystems among software projects developed on the GitHub platform [ 35 ]. Blincoe et al. constructed a graph in which vertices denoted software projects on GitHub and edges represented technical cross-project references. Multi-project ecosystems in their graph were then identified using a community detection method and displayed visually. This study takes graphing ecosystems a step further: it uses quantifiable graph measures to establish that addons form a resilient ecosystem and then to detect collaboration and adversary relations between the members of the ecosystem.

To establish that addons form a resilient ecosystem, resilience is formalized as a link prediction problem. The general task of link prediction aims at estimating whether a link should exist between two disconnected nodes in a graph based on the graph’s structure [ 32 , 36 – 38 ]. Link prediction is often used for recommendation purposes, such as in online social networks, where it is applied to identify likely but “missing” positive links that can then be recommended as promising friendships [ 39 ], and in the automatic enrichment of knowledge bases that are represented as relational graphs with missing edges [ 40 ]. Often, link prediction is evaluated by removing known existing edges and measuring the extent to which these edges can be recovered based on the remaining graph.

The current study utilizes that ability to address RQ1 using the PageRank method [ 41 , 42 ] and its Personalized PageRank (PPR) variant (sometimes referred to as random walk with restart (RWR); see [ 43 ]). The well-known PageRank model applies a random walk process in which, at each step, a random walker stochastically chooses either to traverse an outgoing link or to jump (“restart”) to a random node in the graph. This random walk process converges to a stationary node probability distribution in which the score of each node represents its structural centrality in the graph. The main drawback of the PageRank model is that it fails to incorporate node-specific context. The Personalized PageRank method addresses this shortcoming by applying a minor enhancement: rather than “jumping” to some node uniformly at random, the restart operation is confined to a distribution of interest, referred to as a query. In such a setting, the PPR score of a given node reflects its relevance with respect to the query.
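For concreteness, the stationary distribution of this process can be written in a standard textbook form (the paper’s own notation is not preserved in this copy):

    \pi = (1 - \alpha)\, r + \alpha\, P^{\top} \pi

where P is the row-stochastic transition matrix of the graph, \alpha is the probability of continuing the walk at each step (set to 0.85 in the experiments below), and r is the restart distribution: uniform over all nodes for plain PageRank, and confined to the query nodes for Personalized PageRank.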

The Personalized PageRank random walk metric has been applied to a large variety of tasks, including ranking Web pages and influential social media users with respect to topics of interest [ 44 , 45 ] and personalized and context-sensitive item recommendation [ 46 ]. In addition to Web networks and social media, PPR has been successfully applied to other domains, including personal information management [ 31 ], computational linguistics [ 47 ], and computational biology [ 48 ].

A few previous studies have attempted to automatically identify competition relationships between companies that offer similar products and thus compete over market share. Most existing studies used text documents as their main information source (e.g., [ 49 , 50 ]). In the context of the current study, collaboration (or symbiosis) is defined as co-existence in the same habitat (the same user browsers), while competition (or clash) is defined as addons eliminating each other. Notably, the graph contains no explicit indication of positive or negative relationships between nodes, so existing methods (e.g., [ 51 , 52 ]) that infer symbiosis from positive links and clashes from negative links cannot be applied. Instead, those relationships must be uncovered solely from the graph structure in an unsupervised manner.

Data

The current study examined large-scale authentic data describing browser addons installed on real users’ computers. These data were collected from users all over the world who agreed to anonymously share this information. It is a common scenario that users maintain multiple browsers; for example, Microsoft Internet Explorer is pre-installed on Windows machines, and many users install an additional browser. The database lists addons installed on multiple browsers, including Microsoft Internet Explorer, Mozilla Firefox, and Google Chrome. The data were stored in a relational database on the cloud at Amazon RDS. As of 2013, the dataset included over 1.5 billion records. For the purpose of this study, a subset of the data was considered: all of the records collected over a period of two months between August 1, 2013 and October 1, 2013. For every user there could be multiple records collected, describing a snapshot of his/her machine on a daily basis. As the length and frequency of data collection were inconsistent over time and across users, only the records collected at the earliest date per user were considered. Overall, the dataset contains 17,942,715 user-addon associations that correspond to 907,844 distinct users and 256,458 distinct addon descriptions. Fig 1 shows the distribution of the number of addons installed per user machine. As shown, most users had between 9 and 21 addons on their machines.

[Fig 1. Distribution of the number of addons installed per user machine. https://doi.org/10.1371/journal.pone.0179281.g001]

Each addon record in the dataset includes the following attributes:

  • Addon type. These form a closed set, where prevalent values are ‘extension’, ‘toolbar’, or ‘BHO’ (Browser Helper Object, an Internet Explorer addon).
  • File name. The full path at which the addon software is installed on the user machine.
  • Name. The addon’s name.
  • Description. A textual description of the addon’s functionality.

Figs 2 and 3 show two addon records associated with two different users. The information specified is browser-dependent and sometimes missing; in these two cases, the addon description is missing for the first user and the path information is missing for the second. For each user-addon pair, at least one attribute (path, name, or description) is guaranteed to be present in the data.

[Fig 2. An addon record for the first user (description missing). https://doi.org/10.1371/journal.pone.0179281.g002]

[Fig 3. An addon record for the second user (path missing). https://doi.org/10.1371/journal.pone.0179281.g003]

Importantly, similar addon software may be described by multiple different records, i.e., the addon records lack normalization. Fig 4 illustrates this variability across records. Sources of variance include different installation paths, availability or absence of attribute values, and different software version numbers (e.g., 1.8.7.2 vs. 1.6.4.6 in Fig 4). Furthermore, the user base is international and therefore multilingual. The database includes no tracking of the user’s or other programs’ actions; it was therefore impossible to determine which party initiated the installation (or removal) of an addon.

[Fig 4. Multiple records describing similar addon software. https://doi.org/10.1371/journal.pone.0179281.g004]

Graph representation

The graph contains three types of nodes:

  • User. An individual user is represented as a graph node that carries his/her unique user id.
  • Addon. These nodes correspond to specific addons, defined as the concatenation of all of the addon’s attributes, namely file path, addon name, and description. Addon names often include full file-system path information such as “C:/Program Files (x86)/Skype/skype1.dll”. To avoid registering an addon twice solely due to minor discrepancies in the installation process, path prefixes such as “C://Program Files (x86)/skype” were removed. Additionally, addons with slightly different names, such as different version numbers, were unified by the random walk; this was done by splitting addon names into tokens and linking the respective addon and term nodes to maintain connectivity between multiple versions of the same addon.
  • Term. The text strings that comprise addon names were parsed into individual terms, represented as graph nodes, as illustrated below.

[Table 1. https://doi.org/10.1371/journal.pone.0179281.t001]

There are two types of edges in the graph. The first type represents the structural association between each user and each addon installed on his/her machine. The second type links each addon node to all term nodes that comprise its Bag-of-Terms representation. Inverse edges exist between every connected node pair, so the graph may be viewed as undirected. Fig 5 illustrates the graph structure: a user is represented as a graph node that is connected to all its corresponding addon nodes with undirected edges, and each addon node, in turn, is connected to all its term nodes. In the specific example of Fig 5, user 2 has two addons (Babylon-addon and Conduit-toolbar) that are connected to their term nodes (Babylon, Conduit, addon, and toolbar). To construct the graph, the algorithm iterated over all the users in the dataset to create user nodes. For each user, it then iterated over all his/her addons and mapped each addon to a unique node. Finally, each addon name was lower-cased and tokenized into single words, and each unique word was mapped to a respective term node. A code sketch of this construction follows Fig 5.

[Fig 5. Illustration of the user-addon-term graph structure. https://doi.org/10.1371/journal.pone.0179281.g005]
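To make the construction above concrete, the following is a minimal sketch in Python using python-igraph (the study reports using the igraph library, though not necessarily through this interface). The record list and the tokenization rule are illustrative, not the study’s exact preprocessing.

    import igraph as ig

    # Illustrative (user, addon) records; real records also carry paths
    # and descriptions that would contribute additional terms.
    records = [
        ("user_1", "Babylon-addon"),
        ("user_2", "Babylon-addon"),
        ("user_2", "Conduit-toolbar"),
    ]

    vertices, index = [], {}

    def node(name, kind):
        # Return the vertex id for (kind, name), creating it if needed.
        key = (kind, name)
        if key not in index:
            index[key] = len(vertices)
            vertices.append(key)
        return index[key]

    edges = set()
    for user, addon in records:
        u, a = node(user, "user"), node(addon, "addon")
        edges.add((u, a))                           # user <-> addon edge
        for term in addon.lower().replace("-", " ").split():
            edges.add((a, node(term, "term")))      # addon <-> term edge

    g = ig.Graph(edges=sorted(edges), directed=False)
    g.vs["kind"] = [kind for kind, _ in vertices]
    g.vs["label"] = [name for _, name in vertices]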

Besides being compact, the graph representation is advantageous in that similar entities reside in high proximity to each other. Consider, for example, two addons “Skype-US” and “Skype-UK” that have non-identical names but share the term “Skype”, which indicates in this case that they are variants of the same addon. Fig 6 shows how term nodes help construct a connected graph where similar nodes are close to each other: two disconnected segments on the left panel get connected to each other through the “Skype” term node, which leads to a close relatedness between User 1 and User 2.

[Fig 6. Term nodes connect otherwise disconnected segments of the graph. https://doi.org/10.1371/journal.pone.0179281.g006]

Assessing research question 1. Can web browser habitat resilience be verified?

Our link prediction experiment for assessing the resilience of a browser habitat was designed as follows. A direct link between a user and an addon was randomly removed from the graph, and the identity of this “missing” addon was then predicted based on information about the remaining addon members of that user’s environment, applying PPR to rank the addons by their graph-based association with the user’s environment. The objective of the experiment was to show that PPR produces better results than an algorithm that ignores ecological memory and does not model the user’s environment.

More formally, let U denote the set of users represented as nodes in the underlying graph G. Every individual user u ∈ U is linked in G to the set of addons installed on u’s machine, A(u). Having disconnected the link between a random user u_i and an addon a_j ∈ A(u_i), we wish to evaluate the extent to which the missing link between u_i and a_j can be recovered based on G and the remaining information about the user’s environment, A(u_i)′ = A(u_i) ∖ {a_j}. Using information retrieval terminology, in what follows we refer to A(u_i)′ as a query. The candidate responses in this case are all addons that are not known to be associated with the user, i.e., A ∖ A(u_i)′; this candidate set includes the target response a_j. These candidate nodes are ranked by their estimated relevance to the query. Accordingly, performance is evaluated quantitatively with respect to the rank of the “missing” addon a_j across multiple instantiated queries.


Predicting the missing addon using PPR

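The formal definition that belongs in this subsection is not preserved in this copy. Under the definitions above, the prediction rule amounts to running PPR with the restart distribution confined to the user’s remaining addons and returning the highest-scoring candidate; in a standard formulation (again, not the paper’s own notation):

    \hat{a} = \arg\max_{a \in A \setminus A(u_i)'} \pi_a,   where   \pi = (1 - \alpha)\, v_{u_i} + \alpha\, P^{\top} \pi

and v_{u_i} is the uniform distribution over the query set A(u_i)′.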

Experimental setup and evaluation

Each labeled query was generated as follows (a code sketch of one such trial appears below):

  • Pick uniformly at random a user node u from the graph.
  • Select the set of all addon nodes linked to u, A_u.
  • Pick uniformly at random an addon node a from the set A_u.
  • Remove the edge between nodes a and u in the graph.
  • Let the query V_u be a uniform distribution over A_u ∖ {a}, and let the correct response to the query (the label) be a.

Performance was evaluated using the following measure:

  • Recall at rank k. This is the fraction of queries in which the relevant response is included among the top k ranks (see also [ 31 ]). Concretely, the non-interpolated recall at rank k of a given ranked list is defined to be 0 for each rank k = 0, …, k_i − 1, where k_i is the rank that holds the single correct entry, and 1 for ranks k ≥ k_i. The (mean) recall at rank k averages the recall scores at each rank k across the rankings of multiple queries. Thus, mean recall is in the range [0, 1] at each rank k. For example, if recall at rank 3 is 0.7, this means that for 70% of the queries the correct answer appears among the top 3 ranks of the generated ranked lists.

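A minimal sketch of one such trial, continuing the python-igraph construction sketch above (the function name and sampling details are illustrative):

    import random

    def run_trial(g, damping=0.85):
        # One link-prediction trial; returns the rank of the removed addon.
        # Consider users with at least two addons, so the query is non-empty.
        users = [v.index for v in g.vs
                 if v["kind"] == "user" and g.degree(v.index) >= 2]
        u = random.choice(users)
        addons = [w for w in g.neighbors(u) if g.vs[w]["kind"] == "addon"]
        a = random.choice(addons)                # the "missing" addon (label)
        g.delete_edges([(u, a)])                 # disturb the habitat
        query = [w for w in addons if w != a]    # V_u: uniform over A_u \ {a}

        # Personalized PageRank with restarts confined to the query nodes.
        scores = g.personalized_pagerank(damping=damping, reset_vertices=query)

        # Candidates: all addons not known to be associated with the user.
        qset = set(query)
        candidates = [v.index for v in g.vs
                      if v["kind"] == "addon" and v.index not in qset]
        ranked = sorted(candidates, key=lambda v: scores[v], reverse=True)
        rank = ranked.index(a) + 1
        g.add_edges([(u, a)])                    # restore the graph
        return rank

    # Mean recall at rank k over a batch of trials, e.g.:
    # recall_at_10 = sum(run_trial(g) <= 10 for _ in range(1000)) / 1000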

To increase robustness, the above measures were applied to evaluate query sets that consisted of 1000 labeled examples of randomly sampled user-addon pairs per experiment. Each experiment was repeated 4 times, and the mean of the 4 evaluation scores per query set is reported together with the standard deviation. All the experiments were run on a fast and memory-efficient implementation of PPR included in igraph [ 56 ], a software library optimized for processing large-scale graphs. The experiments were run on a standard PC using the 64-bit version of igraph, with the entire graph loaded into memory. A batch of 1000 PPR runs was completed within a few hours. In the experiments, the reset probability parameter was set to α = 0.85, following [ 57 ].

Results: Evaluating the effect of ecological memory

PPR was compared against two non-personalized baseline algorithms:

  • Popularity baseline (POP). This algorithm predicts the “missing” addon by ranking known addon items by their popularity, defined as the total number of users associated with each addon.
  • PageRank baseline (PR). This algorithm computes for each addon its non-personalized PageRank score in the underlying graph. The PageRank scores reflect the structural centrality of addon nodes in the graph.

Under both of these non-personalized approaches, all queries are presented with the same ranked list of addons (excluding the specific query addons). Table 2 shows the results of the experiment for two graph variants, with and without the term layer. The best results per configuration are marked in boldface in the table. Running t-tests shows that the means of PPR are significantly higher (p < .0001) than those of POP and PR in all the rows and columns in the upper half of Table 2. Fig 7 shows recall-at-k results using PPR compared with ranking by popularity and by PageRank scores, demonstrating the relative performance of the algorithms.

[Table 2. Link-prediction results for the two graph variants. https://doi.org/10.1371/journal.pone.0179281.t002]

[Fig 7. Left: recall at top ranks for the full graph, all the data. Right: recall at top ranks for the full graph, excluding the most popular addons. https://doi.org/10.1371/journal.pone.0179281.g007]

The lower half of Table 2 and the right-hand side of Fig 7 show the results of a similar experiment over a graph variant in which high-degree nodes, defined as nodes with an out-degree of 500 or more, were removed. This additional analysis was run because previous studies indicated that PageRank exhibits some bias in favor of high-degree nodes [ 58 , 59 ]. Other studies indicated that the removal of high-degree nodes from an undirected power-law graph leads to a small approximation error while improving the computational cost of the random walk [ 60 ]. In these additional experiments, the performance of POP plummets as the popular addons are removed from the graph and from the sampled test queries: recall-at-10 is nearly zero (0.006) and recall-at-100 is also very low (0.065). PR results are even lower. In contrast, PPR remains effective: recall-at-10 is 0.405, reaching 0.491 and 0.527 at ranks 50 and 100, respectively. (Again, t-tests show that the means of PPR are significantly higher (p < .0001) than those of POP and PR.) This indicates that popular nodes, which tend to occupy the top ranks, indeed “push” relevant yet less popular nodes to lower positions in the ranked lists; this phenomenon is especially dominant in the one-fits-all, non-personalized ranking approaches.

In conclusion, the personalized PPR produced significantly better results than the non-personalized methods. The revealed structural association between the addons installed on a user’s PC is strong enough to enable recovering the identity of an addon that was deliberately removed. The Web browser ecosystem is resilient in this respect, answering RQ1 in the affirmative. The next section, addressing RQ2, looks into one possible reason for that resilience: some addons are complementary to, or in competition with, each other. Such symbiosis and clash, respectively, might be due to business alliances and rivalries.

Assessing research question 2. Measuring symbiosis and clash through PPR

Symbiosis in a Web browser habitat often occurs when addons of some companies are distributed via third parties. In such a process, an addon’s installation is offered to a user as part of some other product’s installation process. For example, a user installing Skype may be prompted to also install Skype’s “Click to Call” addon in all browsers. Another example: at the time the data for this section was collected, Ask Toolbar installation was integrated with Java installation so that, during the installation of Java, users were prompted to download and install Ask Toolbar as well [ 61 ]. A clash effect can be observed when addons of one company are removed when addons of another company are installed on the same machine, or when addons are not installed at all because another company’s addons are pre-installed on that machine. For example, Kaspersky AntiVirus, which develops addons for all browsers, treats iMesh addons as threats and removes them from the computer [ 62 ].

Needless to say, the life cycle of the addon ecosystem is mostly obscure to an outside observer. While some symbiotic effects may be visible to users (e.g., an addon is offered for installation during the installation process of another addon or a software product), other symbiotic effects are hidden (e.g., undisclosed agreements between addon distributors). Clash effects, on the other hand, are almost always invisible: apart from a few well-known conflicts between competing addon distributors that were widely covered in the mass media [ 63 ], such competition happens out of sight.

In order to identify symbiosis and clash relationships among addons, eighteen prominent addon distributors, detailed in Table 3, were manually chosen. These companies are among the best-known addon and toolbar distributors. Seven of the eighteen companies are antivirus and anti-malware companies. Although antivirus and anti-malware software aims to prevent unintentional addon installation, some antivirus companies not only fight unintentional addon installations but also distribute their own addons and toolbars. For example, AVG Antivirus distributes the AVG Security Toolbar, which is detected by Avast Antivirus as malware. Indeed, in 2013 Avast Antivirus identified over 3.3 million different browser extensions for the three major browsers and published a list of the top ten companies whose addons were subject to removal [ 64 ]. Their updated list, published in 2015, did not change dramatically. Many of those companies are included in Table 3. In a blog post of July 9, 2015, Avast Antivirus described the addon environment of a user’s Web browser, much as this study does, as an ecosystem where “addons fight against each other” [ 65 ]. Based on Avast Antivirus statistics on the forced removals of competing toolbars, some companies in Table 3 are among the top ten offenders. For example, Conduit performed more than 13 million removals of their competitors’ toolbars, ASK removed 11 million toolbars, and other companies were not far behind. Avast Antivirus itself has been accused of doing the same: “Avast is contradicting itself. Their latest product offers a built-in feature to rid your browser of toolbars, while offering a toolbar when installing their software.” [ 66 ].

[Table 3. The eighteen prominent addon distributors chosen for analysis. https://doi.org/10.1371/journal.pone.0179281.t003]

Experimental design

To address RQ2, it was first necessary to identify the manufacturing company of each addon. An addon company often distributes hundreds or even thousands of addons; for example, the Kaspersky URL Advisor Firefox addon and the Kaspersky Protection Chrome extension are developed by the same company. Where possible, the company name was identified within the addon installation path, name, or description. The default path of an addon package installation often contains the company’s name, and so, if a user does not change the default option, the company name will most probably be included in the addon’s path. For example, the addon path “C:\Program Files (x86)\Kaspersky Lab\Kaspersky Internet Security 2012\avp.dll” and its description “Kaspersky Protection extension” clearly show that the addon belongs to Kaspersky.

Having run that initial manual classification, PPR was applied to the original user, addon, and term graph to identify other addons that belong to one of the companies from Table 3 but were missed by the process described in the previous paragraph. The procedure worked as follows. For each company in Table 3, a PPR query was constructed to contain the set of addons already identified as belonging to that company. The expectation was that an addon belonging to that company but not included in the query would be ranked higher relative to its original rank in the (non-personalized) PageRank. In other words, an addon that is ranked close to a set of addons known to belong to a certain company is a candidate to be an addon of that company even if its name, path, or description does not contain that company’s name. In this process, a non-personalized PageRank was first run on the entire graph, providing a baseline position for each addon. Then, a PPR was run for each company to identify addons that substantially improved their position in the ranked list. For example, if an addon was ranked 100 in the non-personalized PageRank but ranked 10 in the PPR, that addon was manually examined to verify whether it indeed belonged to that company.

The above procedure was performed iteratively. After a new addon-to-company relationship was identified, that relationship was added to the query and the PPR was rerun on the extended query. This iterative process continued until no more addons dramatically changed their rank; in practice, two iterations were enough for the process to converge. The process identified 24 additional addons as associated with the target companies, and a manual check revealed that all 24 were correctly identified. An example of an addon that drastically changed its rank is tbmyba.dll, which jumped from rank 1,200 to rank 15 after running PPR with Babylon addons in the query. Indeed, tbmyba.dll belongs to Babylon [ 67 ]. A code sketch of this expansion loop follows.
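A minimal sketch of the expansion loop, reusing the graph g and its "kind" attribute from the earlier sketches (the jump criterion and helper name are illustrative; in the study, each flagged addon was verified by hand before being added to the query):

    def expand_company_addons(g, seed_addons, jump_factor=10, max_iter=5):
        # Grow a company's addon set by flagging large PPR rank jumps.
        addons = [v.index for v in g.vs if v["kind"] == "addon"]
        baseline = g.pagerank(damping=0.85)      # non-personalized baseline
        base_rank = {v: r for r, v in enumerate(
            sorted(addons, key=lambda v: baseline[v], reverse=True), start=1)}

        known = set(seed_addons)
        for _ in range(max_iter):
            scores = g.personalized_pagerank(damping=0.85,
                                             reset_vertices=sorted(known))
            ppr_rank = {v: r for r, v in enumerate(
                sorted(addons, key=lambda v: scores[v], reverse=True), start=1)}
            # Flag addons whose rank improves dramatically under PPR; these
            # are candidates for (manual) verification and query extension.
            new = {v for v in addons if v not in known
                   and base_rank[v] > jump_factor * ppr_rank[v]}
            if not new:
                break                            # converged
            known |= new
        return known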

Having linked the addons to their respective companies, the symbiosis and clash between addon companies (RQ2) could be assessed. This assessment was done by constructing a set of PPR queries, one for each company, each containing all addons of that company. The PPR output for a target company c_i is a ranking of all the addons in the graph that reflects their association strength to c_i. Addons of another company c_j that are ranked high in that PPR result, compared to their ranking in the non-personalized PageRank, might suggest that the two companies have been engaged in a partnership, a symbiosis. Likewise, addons of a company c_j that are ranked considerably lower in the PPR ranking computed for company c_i as the query, compared to their position in the non-personalized PageRank, might suggest that the two companies clash with each other.

Symbiotic relationships as addon set overlaps

[The overlap measure and Table 4 are not preserved in this copy. https://doi.org/10.1371/journal.pone.0179281.t004]

Identifying symbiotic and clash relationships via personalized PageRank

We argue that an alternative and potentially better method to identify symbiotic relationships is to apply a graph-based measure of relative importance using Personalized PageRank. As in RQ1, if companies are in a symbiosis or a clash, the relationships among their addons should reveal it. Provided with a query that consists of all addons of a company, PPR should increase or decrease the scores of other companies’ addons relative to their non-personalized PR scores. A substantial increase in the scores of a company’s addons should indicate its symbiosis with the query company; a marked decrease might indicate a clash between them.


Accordingly, for each company c_i in the PPR query, the algorithm computed the expected PPR score of every other company c_j and compared those scores with the original, non-personalized expected PR scores. Table 5 shows the relative importance: the ratio between the expected PPR score of company c_j given a query company c_i and the expected non-personalized PR score of c_j. Red cells (low ratios) suggest a clash between the companies, and green cells (high ratios) a symbiosis. The ratios in Table 5 are non-symmetric, i.e., the expected PPR score of company c_1 can decrease when c_2 is in the query, while the expected PPR score of c_2 can increase when c_1 is in the query. This may indicate a complex relationship between the two companies: c_1 and c_2 can sign a contract according to which c_1 helps distribute the addons of c_2, while c_2 is under no obligation to distribute the addons of c_1; moreover, c_2 may even end up removing c_1’s addons.

[Table 5. Relative importance ratios between companies. Columns are companies in PPR queries. https://doi.org/10.1371/journal.pone.0179281.t005]
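A sketch of the relative-importance computation described above. The study’s exact aggregation of addon scores into a per-company expected score is not preserved in this copy; the mean over the company’s addons is assumed here for illustration:

    def relative_importance(g, company_addons, damping=0.85):
        # company_addons maps company name -> list of its addon vertex ids.
        # Returns ratios[(cj, ci)]: expected PPR score of cj given query ci,
        # divided by the expected non-personalized PR score of cj.
        pr = g.pagerank(damping=damping)
        expected_pr = {c: sum(pr[v] for v in vs) / len(vs)
                       for c, vs in company_addons.items()}

        ratios = {}
        for ci, seed in company_addons.items():
            ppr = g.personalized_pagerank(damping=damping, reset_vertices=seed)
            for cj, vs in company_addons.items():
                expected_ppr = sum(ppr[v] for v in vs) / len(vs)
                ratios[(cj, ci)] = expected_ppr / expected_pr[cj]
        return ratios

    # With the cutoffs discussed below: a ratio under 0.6 suggests a clash,
    # and a ratio above 1.02 suggests a symbiosis.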

The red and green coloring in Table 5 is based on setting score ratio cutoffs at .6 for clashes and 1.02 for symbioses. The rationale behind those cutoffs is based on the results of a Kernel Density Estimation (Gaussian kernel, bandwidth = 0.1) performed on the distribution of values in Table 5. Fig 8 shows the Kernel Density Estimation output with the .6 and 1.02 cutoffs superimposed on it. The two cutoff values were determined by eyeballing the transition points in the Kernel Density Estimation graph. The range from .4 up to .6 resembles the beginning of a seemingly normal distribution, climbing to a peak and then declining. The range from .6 to 1.02 shows a considerably more gradual decline, with some wrinkles, suggesting that it forms another stratum in the values of Table 5. The range above 1.02 shows a straightening out of the curve. As there are no guidelines on choosing the transition points in a Kernel Density Estimation graph, alternative ranges, suggesting other transition points, were also tried. Moving the first transition point to .55, where the curve ends its initial incline and starts declining, or moving the second transition point to the right of the 1.02 mark, resulted in only minor changes to the coloring pattern in Table 5.
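For reference, the density estimate can be reproduced along these lines with scikit-learn (grid details are illustrative; ratios is the output of the sketch above):

    import numpy as np
    from sklearn.neighbors import KernelDensity

    values = np.array(list(ratios.values())).reshape(-1, 1)
    kde = KernelDensity(kernel="gaussian", bandwidth=0.1).fit(values)

    grid = np.linspace(values.min(), values.max(), 500).reshape(-1, 1)
    density = np.exp(kde.score_samples(grid))  # score_samples returns log-density

    # The .6 (clash) and 1.02 (symbiosis) cutoffs were read off visually
    # from transition points in this curve (Fig 8).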

[Fig 8. Kernel Density Estimation of the score ratios in Table 5. Red vertical lines are score ratio cutoffs for clash (left) and symbiotic (right) relationships. https://doi.org/10.1371/journal.pone.0179281.g008]

To verify the implied meaning behind the transition points in Fig 8, and the resulting coloring pattern in Table 5, a sample of the implied clashes and symbioses was examined. Market behavior seems to support the implied classification. For example, the lowest implied symbiotic score ratio, 1.02, is the ratio of Softonic in IncrediMail’s PPR. A symbiosis could be expected between these companies: because Softonic develops email addons, IncrediMail might be one of Softonic’s distributors. Indeed, Softonic’s “PostSmile works with the most popular email programs, including Outlook, Outlook Express, Eudora, Thunderbird, IncrediMail, AOL Mail and many others” [ 69 ]. However, even if Softonic was installed on a user’s machine, it could have arrived there through another email client, which is why IncrediMail’s score ratio is only 0.7 in Softonic’s PPR.

Another prominent example is the partnership between IncrediMail and Conduit. This symbiosis is well known (Conduit eventually acquired IncrediMail, now called Perion [ 70 ]). The score ratio of 2.04 of Conduit in IncrediMail’s PPR suggests that if IncrediMail is installed, Conduit’s addons are likely to be found as well; the opposite, however, is not true. The coloring in Table 5 can also reveal less known symbioses. Conduit and Babylon have long been considered competitors. However, the score ratio of 1.64 of Conduit in Babylon’s PPR lifts the curtain on a possibly well-hidden agreement between the two companies. Indeed, Conduit’s and Babylon’s toolbars tend to appear together [ 71 ].

Another revealing result in the coloring of Table 5 is the implied relationship between Avira Antivirus and ASK. A collaboration between Avira Antivirus and the toolbar distributor ASK appears counterintuitive: an antivirus company is unlikely to promote the addons of a toolbar distributor, considering that antivirus software often treats toolbars as spyware. Nevertheless, ASK’s score ratio in Avira Antivirus’s PPR is 1.23, suggesting that when Avira Antivirus is installed there is a higher probability of ASK addons being found. Indeed, there is a market symbiosis between these two companies: Avira Antivirus’s official website states that “Avira chose Ask.com to be our partner in bringing you the SearchFree Toolbar” [ 72 ]. Likewise, the high score ratio of iMesh in Avira Antivirus’s PPR is intriguing. Here too, a discussion on the official Avira website hints at a connection between the two companies [ 73 ].

Score ratios below 0.6 may indicate clashes between competing companies. It is known that the addon space is very competitive and that many clashes occur. For example, the majority of antivirus products clash with each other: since in most cases two antivirus products cannot coexist on the same computer, when one product is installed the other often gets uninstalled. Likewise, antivirus software tends to remove toolbars and other addons. In Table 5, the columns corresponding to major antivirus companies, such as Kaspersky and Norton, contain many red cells. This implies that wherever Kaspersky or Norton is installed, other antiviruses and toolbars are rarely seen. Interestingly, smaller antivirus companies, such as Avira and TrendMicro, have many red cells in their corresponding rows. It is remarkable that the free antivirus tool AVG appears to live in harmony with other companies, without seeming to remove toolbars or browser addons [ 74 ]. In the toolbar domain, ASK and Google appear to be the largest offenders. As discussed at the beginning of this section, ASK is known for removing millions of rival toolbars, while “Google toolbar … prevents other toolbars being installed into your computer” [ 75 ].

Summary of results

The ecosystem of the Web browser is a mostly unexplored research area. This research is, to the best of our knowledge, the first to measure this dynamic environment, with its important economic and security consequences. Its importance is highlighted by the observation that addon manufacturers engage in partnerships and compete with each other by supporting and suppressing the distribution of each other’s addons. As most of these dynamics are hidden from the eyes of ordinary Web users, this activity also raises serious privacy issues. Being able to measure these activities can open the door to at least partially monitoring them and potentially alleviating some of the privacy issues. Accordingly, the goal of this study was to develop tools and methodologies for measuring activity in this complex ecosystem.

The study, analyzing a unique dataset of addons installed on almost a million machines, applied Personalized PageRank (PPR) to capture relationships between vertices in the graph. The results show that the Web browser ecosystem retains enough structure to identify deliberately removed addons. Armed with this observation (RQ1) and with the methodology developed for RQ2, the results show that symbioses and clashes can be identified, and better so with PPR than with a non-personalized method. The results show that some companies are engaged in symbiotic relations: when one company’s addons are installed on a machine, there is a good chance that another company’s addons will be there too. Other companies clash with each other in the addon ecosystem, seemingly removing the addons of specific other companies.

Direct implications

The ability to measure the extent to which addons tend to appear, or not appear, together suggests a method to actively detect symbioses and clashes between addon-distributing companies. This could have important implications for regulators and for addon companies. Information about a symbiosis between two companies can help better analyze the powers and driving forces in the addon ecosystem, allowing competitors to better prepare and regulators to better regulate. Addon companies could benefit from knowing which other companies support them and which oppose them in practice, by monitoring the way other addons apparently suppress or distribute their own addons. Importantly, regulators could glean insight into actual addon ecosystem behavior to identify economic oligarchies, often regulated in other economic environments, and so ensure more open competition. And, from a user perspective, regardless of the legality of removing or adding new addons without user approval, being able to monitor these addon clashes and symbioses could go a long way toward building a trustworthy and open addon ecosystem.

For an addon manufacturer, detecting a clash involving its addons carries several practical implications:

  • Such a clash may imply that the user prefers another company’s addon over the manufacturer’s own, so there might be a way to perform a comparative analysis of the two addons and learn how to improve the value proposition.
  • The clash could mean that a newly installed addon is hostile to other addons in a potentially illegal way, i.e., it is the addon, and not the user, that uninstalls or sabotages another addon. If so, the distributor of the removed addon could report an abuse.
  • An addon manufacturer can ask a third-party distributing company not to install its addon on machines that host the hostile addon. Since in many cases addon developers pay distributors per install, this could decrease the developers’ costs and raise their profits in the long run.
  • A clash can occur between addons of seemingly non-competing companies. This may happen when something goes wrong in the distribution process and the problem slips off the company’s radar. In this case, precise information about unexpected clashes might help the affected company quickly fix the problem.
  • The addon ecosystem may be so complex that distribution monitoring is barely possible. If an unintended clash is detected, the owner of the affected addon can contact the owner of the hostile addon and ask them to act.

Broader implications and agency relationships

On a broader perspective, measurably implied clashes and symbioses in the addon ecosystem might suggest that lessons learnt from other kinds of economic ecosystems might apply to the addon ecosystem too. A perhaps pertinent example of this is Agency Theory [ 76 ]. Agency theory deals with contractual relationships between principals and agents who might be individuals or company representatives. The agency theory perspective is central to understanding when and how people and companies contract with each other [ 77 ]. In agency theory, a principal lets out work to an agent in a context of information asymmetry characterized by the agent knowing more about its own capabilities and actions than the principal can possibly know. This opens up the principal (who in the case of the addon ecosystem is the user allowing companies to install addons on his/her machine) to several risk categories from the agents, who in this case are the companies installing those addons. These risks are classified in agency theory into three broad categories widely known as adverse selection risks, moral hazard risks, and unforeseen contingencies. Adverse selection risks are risks associated with not knowing enough about the agents competing on the contract before awarding it to one of them. Often, principals are not fully aware of the capabilities or track record of the competing agents when choosing among them. This allows agents to oversell their capabilities and to masquerade as something they are not. Moral hazard risks are risks associated with the actions of the agent who has been awarded the contract to do the work. Typically, principals are not capable or do not have the resources to carefully oversee everything an agent is doing for them. This allows the agent to take advantage of the principal without the principal being aware of it. Unforeseen contingencies refer to the cost of dealing with unexpected events that were not included in the contract.

In the case of the addon ecosystem, adverse selection risks concern users not fully knowing the consequences of granting permission to addon companies to install addons, and their inability to investigate the capabilities of those companies and what they are really after. The fact that users may not even realize that they should monitor addon companies before granting them permission, let alone know how to do so, increases the magnitude of such adverse selection risks. Moral hazard risks in the addon ecosystem may entail precisely the kind of hidden symbioses and clashes investigated in this study. Specifically, it would seem that addon companies are adding and removing other addons without the knowledge or consent of the user; activities concerning the principal that are done without his/her knowledge or consent are typical of moral hazard risks. Importantly, in this context, at least some adverse selection and moral hazard risks can be somewhat alleviated by the principal (or, as often happens in the real world, by a regulating agency or by other competing agents) being able to measure the behavior of the agents. Knowing of preexisting clashes and symbioses in the addon ecosystem could inform users of the possibility that addons may do more than the user expects them to do. Knowing of such relationships through monitoring the ecosystem, as shown in RQ2, could at least partly inform the principals of the potential for such a risk. And, as RQ1 shows, it is also possible to establish that such an activity actually occurred, so moral hazard can be identified as well.

Applying the agency theory perspective may allow the transformation of the addon ecosystem into a mature marketplace. Within the agency theory context, the ability to control, or at least to measure, some of the adverse selection and moral hazard categories of risk is a key determinant of the price of the contract and of whether the contract will be a fixed-price or a time-and-materials one [ 78 ]. Applying agency theory to the context of addons suggests that being able to measure the way addons interact with each other, i.e., to measure agent behavior in removing and installing other addons, may affect the pricing of those services and, once a market for such measurements becomes viable, also the very nature of the contracting. Even if users cannot be expected to apply the type of algorithms shown in this paper, regulators and competing companies can be. Regulators can be expected to take action if competition in the market is reduced. Competing companies can be expected to take action if their profit margins, or their access to information about potential clients, are affected by having their own addons removed.

Once such algorithms are applied and such agency risks identified, users could, as in other agency theory contexts, demand discounts or rebates for allowing addons to be installed on or removed from their machines, much as customers currently expect in return for using loyalty cards that allow stores to track their purchase activities. As with loyalty cards, the justification is increased privacy exposure. The industry already has a well-established market for paying other websites to direct traffic their way; users could demand a cut of that profit as compensation for being tracked. Likewise, users could demand bonuses or rebates because of their increased exposure to a larger pool of addon companies through the automatic installation of addons by companies in symbiosis with each other. Being able to measure such activity, as shown in this paper, is the first step toward such a transformation.

The methodology proposed in this study investigates a previously unexplored domain: Web browsers and the ecosystem of their addons. The results show that in the Web browser ecosystem addons have symbiotic and clash relationships. The process described in this paper could provide a method to detect, and in doing so also regulate, this ecosystem with limited manual intervention. This could transform the current unwieldy addon ecosystem into a more traditional agency-type market.

Author Contributions

  • Conceptualization: EM RB DG.
  • Data curation: SF.
  • Formal analysis: SF EM RB DG.
  • Investigation: SF.
  • Methodology: SF EM RB DG.
  • Project administration: EM RB.
  • Software: SF EM RB.
  • Supervision: EM RB DG.
  • Validation: EM RB DG.
  • Visualization: SF EM RB DG.
  • Writing – original draft: SF EM RB DG.
  • Writing – review & editing: EM RB DG.
  • 1. http://www.chromium.org/chromium-os ; Retrieved on June 1, 2017.
  • 2. http://www.medianama.com/2012/06/223-the-lowdown-google-io-2012-day-2-310m-chrome-users-425m-gmail-more/ ; Retrieved on June 1, 2017.
  • 3. Elzer S, Schwartz E, Carberry S, Chester D, Demir S, Wu P. A Browser Extension for Providing Visually Impaired Users Access to the Content of Bar Charts on the Web. In: WEBIST (2); 2007. p. 59–66.
  • 4. Ross B, Jackson C, Miyake N, Boneh D, Mitchell JC. Stronger Password Authentication Using Browser Extensions. In: Usenix security. Baltimore, MD, USA; 2005. p. 17–32.
  • 5. https://www.google.com/patents/US8375131 ; Retrieved on June 1, 2017.
  • 6. Leontiadis I, Efstratiou C, Picone M, Mascolo C. Don’t kill my ads!: balancing privacy in an ad-supported mobile application market. In: Proceedings of the Twelfth Workshop on Mobile Computing Systems & Applications. ACM; 2012. p. 2.
  • 7. http://seekingalpha.com/article/1147451 ; Retrieved on June 1, 2017.
  • 8. http://finance.yahoo.com/news/google-may-miss-2013-revenue-113926474.html ; Retrieved on June 1, 2017.
  • 9. https://support.google.com/adwordspolicy/answer/50423?hl=en ; Retrieved on June 1, 2017.
  • 10. Russell S, Dewey D, Tegmark M. Research priorities for robust and beneficial artificial intelligence. arXiv preprint arXiv:1602.03506. 2016.
  • 11. Dix A. Human-computer interaction. Springer; 2009.
  • 16. Adamic LA, Lento TM, Adar E, Ng PC. Information Evolution in Social Networks. In: Proceedings of the Ninth ACM International Conference on Web Search and Data Mining (WSDM); 2016.
  • 19. Messerschmitt DG, Szyperski C. Software ecosystem: understanding an indispensable technology and industry. MIT Press Books. 2005;1.
  • 20. Dhungana D, Groher I, Schludermann E, Biffl S. Software ecosystems vs. natural ecosystems: learning from the ingenious mind of nature. In: Proceedings of the Fourth European Conference on Software Architecture: Companion Volume. ACM; 2010. p. 96–102.
  • 22. Jansen S, Brinkkemper S, Cusumano MA. Software Ecosystems: Analyzing and Managing Business Networks in the Software Industry. Edward Elgar Publishing; 2013.
  • 23. Lungu MF. Reverse Engineering Software Ecosystems. University of Lugano. Lugano, Switzerland; 2009.
  • 24. Blincoe K, Harrison F, Damian D. Ecosystems in GitHub and a method for ecosystem identification using reference coupling. In: Proceedings of the 12th Working Conference on Mining Software Repositories (MSR); 2015. p. 202–207.
  • 27. Holling CS. Resilience and stability of ecological systems. Annual review of ecology and systematics. 1973; p. 1–23.
  • 28. Tilman D, Downing JA. Biodiversity and stability in grasslands. In: Ecosystem Management. Springer; 1996. p. 3–7.
  • 29. Gunderson LH. Ecological resilience–in theory and application. Annual review of ecology and systematics. 2000; p. 425–439.
  • 32. Sun Y, Han J, Aggarwal CC, Chawla NV. When will it happen?: relationship prediction in heterogeneous information networks. In: Proceedings of the ACM International Conference on Web Search and Data Mining (WSDM); 2012.
  • 35. https://github.com/ ; Retrieved on June 1, 2017.
  • 39. Leskovec J, Huttenlocher D, Kleinberg J. Predicting positive and negative links in online social networks. In: Proceedings of the 19th international conference on World wide web. ACM; 2010. p. 641–650.
  • 41. Page L, Brin S, Motwani R, Winograd T. The PageRank citation ranking: Bringing order to the web.; 1999.
  • 44. Haveliwala TH. Topic-sensitive pagerank. In: Proceedings of the 11th international conference on World Wide Web. ACM; 2002. p. 517–526.
  • 45. Weng J, Lim EP, Jiang J, He Q. Twitterrank: finding topic-sensitive influential twitterers. In: Proceedings of the third ACM international conference on Web search and data mining. ACM; 2010. p. 261–270.
  • 46. Lee S, Song Si, Kahng M, Lee D, Lee Sg. Random walk based entity ranking on graph for multidimensional recommendation. In: Proceedings of the fifth ACM conference on Recommender systems. ACM; 2011. p. 93–100.
  • 47. Agirre E, Soroa A. Personalizing PageRank for Word Sense Disambiguation. In: Proceedings of the 12th Conference of the European Chapter of the ACL (EACL); 2009.
  • 48. Freschi V. Protein function prediction from interaction networks using a random walk ranking algorithm. In: Bioinformatics and Bioengineering, 2007. BIBE 2007. Proceedings of the 7th IEEE International Conference on. IEEE; 2007. p. 42–48.
  • 50. Yang Y, Tang J, Keomany J, Zhao Y, Li J, Ding Y, et al. Mining Competitive Relationships by Learning across Heterogeneous Networks. In: Proceedings of the ACM International Conference on Information and Knowledge Management (CIKM); 2012.
  • 51. Kunegis J, Lommatzsch A, Bauckhage C. The PageTrust algorithm: How to rank web pages when negative links are allowed? In: Proceedings of the International World Wide Web Conference (WWW); 2008.
  • 52. de Kerchove C, Dooren PV. The PageTrust algorithm: How to rank web pages when negative links are allowed? In: Proceedings of the 2008 SIAM International Conference on Data Mining (ICDM); 2008.
  • 53. Jeh G, Widom J. Scaling personalized web search. In: Proceedings of the 12th international conference on World Wide Web. ACM; 2003. p. 271–279.
  • 54. Fogaras D, Rácz B. Towards scaling fully personalized pagerank. In: Algorithms and Models for the Web-Graph. Springer; 2004. p. 105–117.
  • 55. Voorhees EM, et al. The TREC-8 Question Answering Track Report. In: TREC. vol. 99; 1999. p. 77–82.
  • 57. Boldi P. TotalRank: Ranking without damping. In: Special interest tracks and posters of the 14th international conference on World Wide Web. ACM; 2005. p. 898–899.
  • 58. Tong H, Faloutsos C. Center-piece subgraphs: problem definition and fast solutions. In: Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM; 2006. p. 404–413.
  • 59. Budalakoti S, Bekkerman R. Bimodal Invitation-Navigation Fair Bets Model for Authority Identification in an Online Social Network. In: Proceedings of the 21st International World Wide Web Conference; 2012.
  • 60. Sarkar P. Tractable algorithms for proximity search on large graphs. DTIC Document; 2010.
  • 61. https://www.intego.com/mac-security-blog/inside-the-ask-toolbar-installed-with-java-for-mac/ ; Retrieved on June 1, 2017.
  • 62. http://securelist.social-kaspersky.com/en/kadvisories/KLA10420 ; Retrieved on June 1, 2017.
  • 63. https://finance.yahoo.com/news/babylon-shares-jump-yahoo-sticks-125203474.html ; Retrieved on June 1, 2017.
  • 64. https://blog.avast.com/2013/03/20/avast-browser-cleanup-at-work/ ; Retrieved on June 1, 2017.
  • 65. https://blog.avast.com/2015/07/09/top-10-most-annoying-browser-toolbars/ ; Retrieved on June 1, 2017.
  • 66. http://techdows.com/2012/11/avast-comes-bundled-with-google-toolbar.html ; Retrieved on June 1, 2017.
  • 67. http://www.file.net/process/tbmyba.dll.html ; Retrieved on June 1, 2017.
  • 69. http://postsmile.en.softonic.com/ ; Retrieved on June 1, 2017.
  • 70. https://techcrunch.com/2013/09/16/conduit-worth-1-4bn-acquires-email-startup-perion-worth-153m/ ; Retrieved on June 1, 2017.
  • 71. http://www.makeuseof.com/answers/remove-babylon-conduit/ ; Retrieved on June 1, 2017.
  • 72. https://www.avira.com/en/avira-searchfree-toolbar ; Retrieved on June 1, 2017.
  • 73. https://answers.avira.com/fr/question/legal-music-free-download-6251 ; Retrieved on June 1, 2017.
  • 74. https://answers.yahoo.com/question/index?qid=20080325142719AAnERkV ; Retrieved on June 1, 2017.
  • 75. http://techdows.com/2009/10/18-advantages-of-using-google-toolbar.html ; Retrieved on June 1, 2017.
  • 77. Bolton P, Dewatripont M. Contract theory. MIT press; 2005.
  • 78. Gefen D, Wyss S, Lichtenstein Y. Business familiarity as risk mitigation in software development outsourcing contracts. MIS quarterly. 2008; p. 531–551.

The modern browser is under attack: Here’s how to protect it

Prioritizing enterprise browsers and embracing advanced SASE technologies helps organizations safeguard digital assets and ensures resilience in an increasingly interconnected and dynamic digital environment.


The modern web browser has undergone a profound transformation in recent years, becoming an indispensable tool in today’s digital age. It facilitates online communication and provides unparalleled productivity, especially as organizations continue to transition to hybrid work models and embrace cloud-based operations. Unfortunately, security infrastructures haven’t evolved as fast as they should, making these browsers prone to attacks.

The secure access service edge (SASE) framework, however, presents a unique opportunity for enterprises. Its holistic approach to cybersecurity integrates wide-area networking and security services into a unified cloud-delivered platform. Incorporating enterprise browsers into SASE architectures has bolstered security by providing potent, comprehensive protection tailored to the unique challenges posed by modern web usage.

Web application use at a tipping point

Despite approximately 85-100% of the workday taking place within web browsers, many enterprises lack security robust enough to respond to threats. In fact, in a recent Palo Alto Networks survey, a staggering 95% of respondents reported experiencing browser-based attacks in the past 12 months, including account takeovers and malicious extensions. The concern becomes even more alarming when you consider that businesses already operate approximately 370 web and SaaS applications, with organizations anticipating a 50% surge in application use over the next 24 months.

This influx of vulnerable browsers and applications can have severe consequences for enterprises, including data breaches, financial losses, and reputational damage. For instance, account takeovers can result in unauthorized access to sensitive information, allowing attackers to steal data or disrupt operations. Malicious browser extensions can introduce malware, exfiltrate data, or provide a backdoor for further attacks. Data breaches can even lead to regulatory penalties, loss of customer trust, and significant financial costs associated with remediation and recovery efforts.

As these threats become more sophisticated, the potential impact on enterprises becomes more severe, necessitating more refined and comprehensive security strategies. Enterprise browser-based SASE enables real-time detection and prevention of threats in the browser as they arise. Advanced threat intelligence and machine learning algorithms detect anomalies, phishing attempts, malicious file uploads and downloads, and malware infections. Threats like these require a proactive approach to security, ensuring potential issues are addressed before a network is compromised.

Hybrid work model and the challenge of personal devices

The shift to a hybrid work model has led to the widespread use of personal devices for accessing corporate applications. Nearly 90% of organizations allow employees to access some corporate applications and data from their personal devices. Personal devices, though, lack the stringent security controls of corporate devices, making them prime targets for cyberattacks. Over 80% of successful ransomware attacks originate from these unmanaged devices.

SASE enforces Zero Trust principles , ensuring that every access to SaaS, web, and GenAI apps is authenticated and authorized. Zero Trust Network Access continuously verifies users and devices before granting access to corporate applications, significantly reducing the risk of unauthorized access and data breaches. By extending SASE protections through an enterprise browser, personal devices receive a similar level of security that corporate-managed devices do.

Phishing attacks and organizational vulnerability

Phishing remains a pervasive threat, with 94% of organizations experiencing such incidents over the last year. Strengthening defenses against these threats is crucial to safeguarding sensitive data and maintaining organizational resilience.

SASE automatically detects and filters out phishing attempts. By scanning links, websites, and files, SASE can identify and block phishing websites before users reach them. Furthermore, SASE's Data Loss Prevention (DLP) capabilities monitor data flows and apply policies that prevent unauthorized data transfers and protect sensitive information from being exfiltrated on the SaaS service side following a successful phishing attack. This way, DLP ensures minimal impact on critical data, even if a phishing attack succeeds.

Financial impact of threats on unmanaged devices

Securing the modern browser isn't just about protecting data; it's about protecting an organization's bottom line. Nearly one-third of companies say losses from poorly managed or unmanaged devices are higher, in terms of financial cost and business impact, than those from all other security incidents. What's more troubling is that despite all the tools available to address cybersecurity challenges, 53% of organizations expressed a lack of confidence in their ability to address security issues on unmanaged or poorly managed devices.

Upgraded device security and management tactics are essential to reduce the financial and operational impacts of such threats. SASE solutions significantly decrease the risk of costly breaches and provide enhanced security posture overall.

Future trends and considerations

The mismatch between the expanding use of web browsers and their stagnant security measures highlights the need for urgent action. Future trends indicate an increasing reliance on AI-driven security measures and emphasize the importance of combining security tools within a unified SASE platform.

By integrating an enterprise browser into a SASE framework, organizations reap the benefits of unified visibility across all of their devices, managed and unmanaged, AI-powered security from the app to the browser, and increased ease of operations with the ability to apply a single policy across all apps in a unified console.




Web Browser Case Studies Samples For Students

14 samples of this type

While studying in college, you will surely have to compose a bunch of Case Studies on Web Browser. Lucky you if linking words together and organizing them into relevant text comes easy to you; if that's not the case, you can save the day by finding an already written Web Browser Case Study example and using it as a model to follow.

This is when you will certainly find WowEssays' free samples directory extremely helpful, as it includes numerous skillfully written works on a wide variety of Web Browser Case Study topics. Ideally, you should be able to find a piece that meets your requirements and use it as a template to build your own Case Study. Alternatively, our competent essay writers can deliver you a unique Web Browser Case Study model written from scratch according to your individual instructions.

The Conclusion Section on Web Browser Research: Case Study Example

Server-Side Scripting Languages: Case Study Example

There are many reasons why programmers want to use server-side scripting in their projects. Accessibility, where users can reach web content using any browser, on any device, anywhere; manageability, where code can be changed easily; security, since the source code is not exposed; and the web-based 3-tier architecture that enables scalability are some of the reasons programmers choose server-side scripting.

Below is an evaluation of the three server-side scripting languages:

  • JSP/Servlets
  • ASP/ASP.NET
  • Python

JSP/Servlet

Lab 2 Answers: Case Study Example


Variables Used: Case Study

Free Case Study on Google Today and in the Future

Q1: Four Main Products of Google

Example of Case Study on Online Threats

Salesforce Cloud Computing Case Study

Cloud computing involves the delivery of computer software or hardware through the internet; the end users receive the application via a web browser. Cloud computing enables a firm to reduce the cost associated with infrastructure, since the firm is relieved of the cost of developing that infrastructure. As a result, the firm is in a position to concentrate on other factors that affect the success of the business. To remain competitive, Salesforce.com has embraced cloud computing: the firm leases its software, as opposed to other firms, which sell their software and hardware.

Case Study on Social Networking

Social Networking Timeline

Free Basic Concept of Dithering: Case Study Sample

Image Dithering

Case Study on Web Application Security

Example of Case Study on New Payroll Application Architecture

Case Study on Security, Firewalls, and VPN Solutions

The Mobile Commerce Explosion: Case Studies Example

Introduction

Free Case Study on Computer Security


Asked ChatGPT "Can you write an essay about web browsers but using a tone of a depressed teenager?" And this was the result

Ugh, like, seriously? Web browsers, man, they're just like another thing we have to deal with in this messed up digital world. It's like, we can't even escape the internet, even if we wanted to. I mean, sure, web browsers are supposed to be all convenient and stuff, making it, like, easier for us to, you know, "surf" the internet. But honestly, it's just one more thing on my list of things that stress me out.

First off, there's like a gazillion web browsers out there. Chrome, Firefox, Safari, Edge – it's like a popularity contest or something. Everyone's like, "Use this one, it's the best," but then someone else is like, "Nah, that one sucks, use this one." And, like, who even has the energy to care anymore? It's like trying to pick the coolest outfit to wear, but for your computer. Can't I just, like, not care and use whatever comes pre-installed?

And then there's all these updates. Like, seriously, can't they just leave me alone? Every time I want to, like, chill and watch some random videos, there's this annoying pop-up telling me to update my browser. And I'm like, "Dude, I just updated you, like, last week. Can you give me a break?" It's like they're trying to control my life or something.

And don't even get me started on cookies and privacy stuff. Like, I get it, they want to, like, track my every move and sell my info to, like, who knows who. But, hello, can a girl get some privacy? It's like they're stalking me, following me around, knowing all my secrets. It's not like I asked for this, you know?

Oh, and the extensions. Like, yeah, I get it, they're supposed to, like, make my life easier or whatever. But honestly, half the time I don't even know what they do. It's like going to a party and not knowing anyone – you're just there, hoping for the best. And, of course, they slow down my browser, because why not add more misery to my life?

But you know what's the worst? Those error messages. It's like the browser is mocking me, laughing at my inability to even load a simple webpage. "Aw, you wanted to check your favorite blog? Too bad, here's an error message just for you." It's like they're out to get me, to ruin my day, to remind me that technology hates me.

So, yeah, web browsers are just another thing in this never-ending cycle of frustration and confusion. They're like that friend who always says they'll be there for you but ends up ghosting you when you really need them. Maybe one day they'll figure themselves out and actually make my life better. But until then, I'll just be here, rolling my eyes every time I open a new tab. Whatever, man.


Lack of PS Portal Web Browser Continues to Annoy Players

By Zarmena Khan

The lack of a PS Portal web browser continues to vex players nearly eight months after the Remote Play handheld launched. The issue keeps cropping up from time to time as players use workarounds to access a hidden browser tucked away in the handheld's settings, but its use is limited and cumbersome.

How to access PS Portal’s hidden web browser

As explained by folks over at XtremePS3, PS Portal's "hidden" browser can be accessed by going to the handheld's settings, selecting "legal notices," and then "other documents," which open up in a browser. However, Sony has severely limited its use. Frankly, it's less frustrating to not use it at all.

The omission was particularly annoying when players wanted to use their PS Portal in public and network access required a web browser. People previously used workarounds like using a phone to connect to public WiFi and then using the phone as a hotspot. It was only recently that Sony added a QR authentication method to resolve this issue.

The allure of PS Portal is to connect to your PS5 and play games while away from the console. That said, it would be nice to have a simple web browser for some basic tasks outside of gaming.


The new Microsoft Edge is here

The new Microsoft Edge is here and now available to download on all supported versions of Windows, macOS, iOS and Android.


Develop extensions for Microsoft Edge

Microsoft Edge is built on Chromium and provides the best-in-class extension and web compatibility. Learn how to begin and get your extensions onto the Edge Add-ons website.


Become a Microsoft Edge Insider

Want to be the first to preview what’s new in Edge? Insider channels are continuously updated with the latest features, so download now and become an Insider.

Web Platform

  • Elevate the browsing experience by customizing it with extensions.
  • Enhance existing websites with native app-like experiences.
  • Debug and automate the browser using powerful tools for web developers.
  • Embed web content (HTML, CSS, JavaScript) in your native applications.

Microsoft Edge Blog

Read the latest on our vision to bring Microsoft Copilot to everyone and more.

Microsoft Edge videos for developers

Check out our video library to learn about the latest web developer tools and APIs available to you.

What’s New in the DevTools

Check out the latest features in the Microsoft Edge DevTools.


Tools, references, guides and more

Discover the tools that will help you to build better websites. Scan your site with WebHint, check the accessibility of your site with the Microsoft Accessibility Tool Extensions, or download a sample of the WebView2 SDK.


Blazor Basics: Accessing Browser Storage in Blazor Web Applications


Today we will learn how to write to and read from the local and session storage in a Blazor Server web application.

In modern web development, storing transient user-specific data is a common task. When the whole web application runs in the browser, we can benefit from storing data inside the browser instead of going through a request/response cycle to store the data on the server.

Examples are items in a shopping cart, the sort order of a table, form input stored before submitting, the selected currency or language, or the arrangement of dashboard elements.

The Benefits of Using the Browser Storage

First of all, why should we care about storing data on the client in the browser storage instead of transmitting it to the server and storing it in the database?

There are a few important benefits to client-side storage that, depending on the use case, have more or less impact on the decision of where to store the information.

  • Performance: As stated in the introduction, since we do not have the HTTP request back and forth to the server, there is no network latency and the overall server load is reduced.
  • Reduced server cost: With less server load, we can scale the application more effectively and it requires less cost to serve the same number of users the more server-side requests we turn into client-side storage access.
  • Reduced bandwidth usage: In the case of an application primarily used from mobile devices, you save money on the user’s mobile phone data plans or bandwidth usage by storing data on the client.
  • Data Security: If we want to store information tied to a specific user, we need to be careful when storing that data on the server and help protect it against attacks. When we store the information on the client side, that responsibility largely stays with the user. Consider the search history of an online shop, where a certain product category could reveal information about a person's health.

The first reason on the list is the most common. However, sometimes it’s a combination of more than one reason to use client-side storage instead of writing the information to the database.

Local Storage vs. Session Storage

Before we implement a solution, we need to understand the terms local storage and session storage. Modern browsers provide these two types of built-in browser storage.

The local storage persists data even after the browser (tab) is closed. As the name suggests, it stores the data on the local machine and offers up to 10 MB of space per domain. Multiple browser tabs or windows with the same domain share the same local storage.

The session storage persists data for a single session and browser tab. Multiple browser tabs have their respective session storage. Like the local storage, its size is limited to about 10 MB.

The restrictions and characteristics make local storage an ideal solution for storing settings, user preferences and column order information.

The session storage is ideal for storing items in a shopping cart or other information that should be deleted when the user leaves the website.

Writing and Reading Data to and from the Session Storage

Let’s say we want to store a random number in the browser’s session storage.

Consider the following Blazor page:
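A minimal sketch of it could look like this (the @page route and the number range are our own illustrative choices; the luckyNumber key matches the screenshot shown later in this article):

```razor
@page "/lucky-number"
@using Microsoft.AspNetCore.Components.Server.ProtectedBrowserStorage
@inject ProtectedSessionStorage SessionStorage

<p>Your lucky number: @luckyNumber</p>
<button @onclick="Generate">Generate</button>

@code {
    private int? luckyNumber;
    private readonly Random random = new();

    private async Task Generate()
    {
        // Generate a random number and write it to the session storage.
        // SetAsync takes a key and a value; the value is encrypted
        // before it is handed to the browser.
        luckyNumber = random.Next(1, 100);
        await SessionStorage.SetAsync("luckyNumber", luckyNumber.Value);
    }

    protected override async Task OnInitializedAsync()
    {
        // Reading is a two-step process: GetAsync returns a result object;
        // we check Success before accessing Value.
        var result = await SessionStorage.GetAsync<int>("luckyNumber");
        if (result.Success)
        {
            luckyNumber = result.Value;
        }
    }
}
```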

We have a button with an associated Generate method that is called when the user presses the button.

The Generate method uses an object of type Random and an object of the ProtectedSessionStorage type from the Microsoft.AspNetCore.Components.Server.ProtectedBrowserStorage namespace.

The API to write data to the browser’s session storage is simple. We use the asynchronous SetAsync method and provide a key and a value as its arguments.

Reading data from the browser's session storage is a two-step process. First, we use the asynchronous GetAsync method and provide a type parameter to specify the result type.

Next, we check the Success property and access the value using the Value property.

The code looks the same when using the local storage instead of the session storage. The only difference is that instead of injecting an instance of the ProtectedSessionStorage, you would use the ProtectedLocalStorage.
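For example, the page sketched above only needs its injected service swapped (all other names stay as before):

```razor
@using Microsoft.AspNetCore.Components.Server.ProtectedBrowserStorage
@inject ProtectedLocalStorage LocalStorage

@code {
    private async Task Generate()
    {
        // Same SetAsync/GetAsync API as ProtectedSessionStorage,
        // but the value survives closing the browser tab.
        await LocalStorage.SetAsync("luckyNumber", new Random().Next(1, 100));
    }
}
```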

Hint: To make this simplified example work, we need to disable prerendering. The reason is that we cannot execute JavaScript during prerendering, because the page does not yet exist in the browser, and we need to call the JavaScript API to access the browser's storage APIs.

To disable prerendering, we need to configure the HeadOutlet and the Routes components in the App.razor file accordingly:
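In a .NET 8 Blazor Web App, for example, the relevant part of App.razor could look like the following sketch (abbreviated to the two components in question):

```razor
<!DOCTYPE html>
<html lang="en">
<head>
    <base href="/" />
    @* Disable prerendering for the head content. *@
    <HeadOutlet @rendermode="new InteractiveServerRenderMode(prerender: false)" />
</head>
<body>
    @* Disable prerendering for the routed page content. *@
    <Routes @rendermode="new InteractiveServerRenderMode(prerender: false)" />
    <script src="_framework/blazor.web.js"></script>
</body>
</html>
```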

Handle Prerendering in Blazor Server

The solution shown in the previous chapter works but doesn't support prerendering.

In this chapter, we will implement a solution that correctly handles prerendering , allowing us to use one of the key benefits of using Blazor Server.

With prerendering enabled, our component code looks slightly different:
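Continuing the earlier sketch (same illustrative names), a prerender-aware version could look like this:

```razor
@page "/lucky-number"
@using Microsoft.AspNetCore.Components.Server.ProtectedBrowserStorage
@inject ProtectedSessionStorage SessionStorage

@if (IsConnected)
{
    <p>Your lucky number: @luckyNumber</p>
    <button @onclick="Generate">Generate</button>
}
else
{
    <p>Loading ...</p>
}

@code {
    private bool IsConnected { get; set; }
    private int? luckyNumber;
    private readonly Random random = new();

    private async Task Generate()
    {
        luckyNumber = random.Next(1, 100);
        await SessionStorage.SetAsync("luckyNumber", luckyNumber.Value);
    }

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (!firstRender)
        {
            return;
        }

        // On the first render after the circuit is connected, browser APIs
        // (and with them the session storage) become reachable.
        var result = await SessionStorage.GetAsync<int>("luckyNumber");
        if (result.Success)
        {
            luckyNumber = result.Value;
        }

        IsConnected = true;
        // OnAfterRenderAsync does not rerender on its own, so we request
        // a rerender explicitly to swap the loading text for the button.
        StateHasChanged();
    }
}
```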

The Generate method looks the same. However, the component template now has a condition and checks for the IsConnected property to decide whether to render loading information or the button.

The most significant changes happen in the component’s code.

First, we move the code that accesses the session store when loading the component from the OnInitializedAsync to the OnAfterRenderAsync lifecycle method.

The reason is that the OnAfterRenderAsync method is triggered when the page has been rendered and provides an argument about whether it is the first component render cycle.

We then check for the firstRender argument and set the IsConnected property to true, which triggers the button rendering in the component code. In this case, we need to call the StateHasChanged method to trigger a rerender of the page.

With those small changes applied, we introduced some nesting to our code, but on the upside, we now correctly handle Blazor Server prerendering.

ASP.NET Core Protected Browser Storage

When using types from the Microsoft.AspNetCore.Components.Server.ProtectedBrowserStorage namespace, such as the ProtectedLocalStorage and the ProtectedSessionStorage, we use protected storage, as the name implies.

ASP.NET Core Protected Browser Storage uses data protection to encrypt the information stored in the browser. It limits access to the information to the application itself and prevents outside access, such as manipulating the data through the developer tools.

When running the example code discussed previously in this article, the browser storage looks like this:

Browser dev tools showing the session storage content with a single key/value pair with the key 'luckyNumber' and an encrypted value.

As you can see, the key is stored in plain text. It makes total sense since we identify the data by its key. However, the value is encrypted, and the random number that we store is stored in the session storage as a string with 134 characters.

Gotchas When Working with Browser Storage

There are a few pitfalls lurking when working with browser storage.

  • Similar to storing data on the server, the browser APIs for accessing the browser storage are asynchronous. This introduces some complexity to the web application.
  • Related to the first pitfall, we cannot access the browser storage during Blazor prerendering because prerendering happens on the server. Therefore, a website that can access browser APIs doesn’t yet exist.

And, of course, we need to consider the data size limitation when deciding whether to store the information client-side or send it to the server.

Browser Storage in Blazor WebAssembly

The example shown in this article uses Blazor Server.

The ASP.NET Core Protected Browser Storage API is a server-only API and, therefore, can only be used for Blazor Server.

When it comes to Blazor WebAssembly, there are two solutions:

  • You can manually implement a wrapper around JavaScript interop calls to access the browser's session storage and local storage APIs (see the sketch after this list).
  • You can use one of the existing third-party open-source Blazor libraries, such as LocalStorage from Blazored.
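As a rough illustration of the first option, a minimal wrapper could look like this (the BrowserLocalStorage type and its members are invented for this sketch; unlike the protected storage types, it stores values unencrypted):

```csharp
using System.Text.Json;
using Microsoft.JSInterop;

// Minimal wrapper around the browser's localStorage API for Blazor WebAssembly.
public class BrowserLocalStorage
{
    private readonly IJSRuntime _js;

    public BrowserLocalStorage(IJSRuntime js) => _js = js;

    // Serialize the value and hand it to localStorage.setItem via JS interop.
    public async Task SetAsync<T>(string key, T value) =>
        await _js.InvokeVoidAsync("localStorage.setItem", key, JsonSerializer.Serialize(value));

    // Read the raw string back and deserialize it; returns default when the key is missing.
    public async Task<T?> GetAsync<T>(string key)
    {
        var json = await _js.InvokeAsync<string?>("localStorage.getItem", key);
        return json is null ? default : JsonSerializer.Deserialize<T>(json);
    }
}
```

Registered with builder.Services.AddScoped&lt;BrowserLocalStorage&gt;() in Program.cs, the wrapper can then be injected into components much like the protected storage types.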

Writing data client-side instead of server-side has different advantages based on the use case. The most important are reduced server load and better responsiveness (no network latency).

Using the ASP.NET Core Protected Browser Storage, implementing a solution for Blazor Server is simple. When done properly, it also supports Blazor Server prerendering.

For Blazor WebAssembly, we need to implement a JavaScript interop wrapper or rely on a third-party solution to access the browser's storage API.

You can access the code used in this example on GitHub.

If you want to learn more about Blazor development, you can watch my free Blazor Crash Course on YouTube. And stay tuned to the Telerik blog for more Blazor Basics.


Claudio Bernasconi

Claudio Bernasconi is a passionate software engineer and content creator writing articles and running a .NET developer YouTube channel. He has more than 10 years of experience as a .NET developer and loves sharing his knowledge about Blazor and other .NET topics with the community.


