COMPUTER SCIENCE CAFÉ

WEB SCIENCE | DISTRIBUTED APPROACHES TO THE WEB

Topics from the International Baccalaureate (IB) 2014 Computer Science Guide. 
ON THIS PAGE
SECTION 1 | KEY TERMINOLOGY
SECTION 2 | COMPARISON OF FEATURES
SECTION 3 | INTEROPERABILITY AND OPEN STANDARDS
SECTION 4 | HARDWARE USED BY DISTRIBUTED NETWORKS
SECTION 5 | GREATER DECENTRALISATION OF THE WEB
SECTION 6 | LOSSY AND LOSSLESS COMPRESSION
SECTION 7 | DECOMPRESSION SOFTWARE

SECTION 1 | KEY TERMINOLOGY
  • Bandwidth | The maximum rate of data transfer across a given path. It determines the amount of data that can be sent over a network or internet connection in a fixed amount of time.
  • Centralized Network | A network structure where a central node or server provides resources to multiple clients and manages network traffic.
  • Compression | The process of reducing the size of a file or data set. It is crucial for efficient storage and faster transmission of data.
  • Data Corruption | The unintentional modification or destruction of data due to hardware failure, software bugs, or other anomalies during data processing, storage, or transmission.
  • Data Sovereignty | The concept that digital data is subject to the laws and governance structures of the country in which it is located.
  • Decentralisation | The distribution of functions and powers away from a central location or authority. In computing, it refers to the dispersion of storage, processing, and networking across multiple locations.
  • Decompression | The process of restoring compressed data back to its original form.
  • Distributed Network | A network model where processing power and data are spread over multiple nodes, rather than centralised.
  • Huffman Encoding | A popular method of lossless data compression that uses variable-length codes to represent symbols, with shorter codes assigned to more frequent symbols.
  • Internet of Things (IoT) | A network of physical objects ('things') embedded with sensors, software, and other technologies for the purpose of connecting and exchanging data with other devices and systems over the internet.
  • Lossless Compression | A data compression technique in which the original data can be perfectly reconstructed from the compressed data.
  • Lossy Compression | A data compression method that discards some data to reduce size, potentially leading to a loss of quality. This is typically used for audio, video, and images.
  • Network Interface Card (NIC) | A hardware component that connects a computer to a network.
  • Peer-to-Peer (P2P) Network | A decentralised network where each participant (peer) shares a part of their resources directly with other peers without the need for centralized coordination.
  • Resilience | The ability of a system or network to withstand and recover from failures or disruptions.
  • Router | A networking device that forwards data packets between computer networks.
  • Run Length Encoding | A simple form of lossless data compression in which sequences of the same data value (runs) are stored as a single data value and count.
  • Server | A computer or system that provides resources, data, services, or programs to other computers, known as clients, over a network.
  • Switch | A networking device that connects devices together on a computer network, using packet switching to receive, process, and forward data to the destination device.
  • Ubiquitous Computing | A concept in software engineering and computer science where computing is made to appear everywhere and anywhere via the integration of computing into everyday objects.
SECTION 2 | COMPARISON OF FEATURES
In the realm of computer science, understanding the distinct features of various computing paradigms is crucial. This section delves into the comparison of four significant areas: Mobile Computing, Ubiquitous Computing, Peer-to-Peer (P2P) Networks, and Grid Computing. Each of these paradigms has unique characteristics and applications, shaping the way we interact with technology.

MOBILE COMPUTING
Mobile computing refers to the use of portable computing devices in conjunction with mobile technology that enables users to access data and information from wherever they are.

Key Features
  • Portability | Devices are lightweight and can be operated while on the move.
  • Connectivity | Offers various forms of wireless connectivity (e.g., Wi-Fi, Bluetooth, cellular networks).
  • Battery-Powered | Operates on battery, making power management a crucial aspect.
  • Location Sensitivity | Capable of utilizing GPS and other technologies to offer location-aware services.

UBIQUITOUS COMPUTING
Ubiquitous computing, also known as pervasive computing, refers to the integration of computing capabilities into everyday objects to make them effectively invisible to the user.

Key Features
  • Invisibility | Seamlessly integrates into the user's environment.
  • Context-Awareness | Ability to sense, adapt to, and respond to the context of use.
  • Interconnectivity | High level of connectivity among devices, often wirelessly.
  • Ease of Use | Requires minimal or no direct user input.

PEER-TO-PEER (P2P) NETWORKS
P2P networks are decentralized networks where each participant (peer) shares a part of their resources (such as processing power, disk storage, or network bandwidth) directly with other peers.

Key Features
  • Decentralization | No central server; every peer is both a client and a server.
  • Scalability | Easily scales with the number of participants.
  • Resource Sharing | Direct sharing of resources among peers.
  • Resilience | Resistant to censorship and central points of failure.

GRID COMPUTING
Grid computing involves combining computer resources from multiple locations to reach a common goal, focusing on complex problem-solving or resource-intensive tasks.

Key Features
  • Resource Pooling | Utilizes a large pool of resources (CPU cycles, storage space) from various locations.
  • High Performance | Aimed at solving large-scale, complex computational problems.
  • Distributed Computing Model | Operates across many separate computers connected by a network.
  • Task Scheduling | Efficient scheduling and management of tasks across the grid.
​
Feature | Mobile Computing | Ubiquitous Computing | P2P Networks | Grid Computing
Primary Focus | Portability and accessibility | Integration into everyday life | Resource sharing and decentralisation | High-performance computation
Connectivity | High, with reliance on wireless technologies | Highly interconnected, often invisible | Direct peer-to-peer connections | High-speed, often wired connections
Resource Usage | Limited by device capabilities | Efficient, distributed across many devices | Utilizes shared resources among peers | High, pooled from multiple sources
User Interaction | Direct and constant | Minimal or invisible | Varies, can be high in setup and maintenance | Minimal, mostly automated
Scalability | Limited by mobile hardware | Highly scalable and adaptive | Highly scalable | Highly scalable, but requires infrastructure
Typical Applications | Personal communication, mobile office, location-based services | Smart homes, wearable technology, IoT | File sharing, distributed computing, collaborative work | Scientific research, complex simulations, data analysis
The comparison of mobile computing, ubiquitous computing, P2P networks, and grid computing highlights the diversity in computing paradigms. Each has its strengths and ideal use cases, from the personal, portable nature of mobile computing to the large-scale, resource-intensive tasks suited for grid computing. Understanding these differences is key to selecting the right technology for a given application.
SECTION 3 | INTEROPERABILITY AND OPEN STANDARDS
In the digital world, the concepts of interoperability and open standards are fundamental yet distinct. This section aims to clarify these terms, highlighting their differences and their respective roles in the realm of technology.

INTEROPERABILITY
Interoperability refers to the ability of different systems, devices, applications, or products to connect and communicate in a coordinated way, without extra effort from the user.

Key Aspects
  • Communication | Involves the exchange of information between systems.
  • Compatibility | Systems must be compatible at various levels (data, network, application) to function together effectively.
  • Cooperation | Different systems work together to achieve a common goal, enhancing functionality or user experience.

Importance
  • Facilitates Integration | Allows for seamless integration of diverse technologies.
  • Enhances User Experience | Users can operate across different platforms and devices smoothly.
  • Promotes Innovation | Encourages the development of versatile and adaptable technologies.

OPEN STANDARDS
Open standards refer to publicly available specifications or criteria for systems that are accessible to any individual or organization. They are established to ensure consistency and compatibility across various platforms and technologies.

Key Aspects
  • Accessibility | Available to anyone, often free of charge or at a nominal cost.
  • Consistency | Provide consistent guidelines that manufacturers and developers can follow.
  • Neutrality | Developed and maintained through a collaborative, consensus-driven process, often by recognized standards organizations.

Importance
  • Ensures Compatibility | Helps in creating compatible and interoperable products and services.
  • Drives Innovation | Open standards encourage competition and innovation within the industry.
  • Reduces Fragmentation | Helps in avoiding market fragmentation due to proprietary technologies.
ASPECT | INTEROPERABILITY | OPEN STANDARDS
Definition | Ability of systems to work together seamlessly | Publicly available specifications for systems
Focus | Functionality and cooperation between different systems | Guidelines and criteria for system design
Key Requirement | Compatibility and communication between systems | Adherence to publicly available specifications
Development | Achieved through design and implementation | Established by standards organisations or consensus
Outcome | Seamless user experience across different systems | Consistency and compatibility in system design
Role in Technology | Facilitates practical integration of diverse systems | Provides a foundation for developing interoperable systems
While interoperability and open standards are closely related, they serve different purposes in the technological landscape. Interoperability is about the practical ability of systems to work together, whereas open standards provide the guidelines and specifications that enable this interoperability. Understanding both concepts is crucial for developers, manufacturers, and users who seek to create or use cohesive, efficient, and versatile technological ecosystems.
SECTION 4 | HARDWARE USED BY DISTRIBUTED NETWORKS
Distributed networks, characterised by their decentralised nature, rely on a diverse range of hardware components to function effectively. The growth of mobile technology has significantly influenced the evolution and capabilities of these networks. This section explores the various hardware elements integral to distributed networks and discusses how advancements in mobile technology have facilitated their expansion.

CORE HARDWARE COMPONENTS IN DISTRIBUTED NETWORKS
SERVERS
  • Function | Central storage and processing units in a network.
  • Types | Includes web servers, application servers, and database servers.
  • Role in Distributed Networks | Host services and resources, distribute tasks, and manage network traffic.

ROUTERS AND SWITCHES
  • Function | Facilitate data transfer within and between networks.
  • Types | Wired and wireless routers, managed and unmanaged switches.
  • Role in Distributed Networks | Direct data packets, manage network traffic, and maintain network integrity.

NETWORK INTERFACE CARDS (NICs)
  • Function | Enable devices to connect to a network.
  • Types | Wired (Ethernet) and wireless (Wi-Fi, Bluetooth) NICs.
  • Role in Distributed Networks | Provide the physical interface for network connectivity in devices.

DATA STORAGE SYSTEMS
  • Function | Store data in a centralised or distributed manner.
  • Types | Hard drives, solid-state drives, network-attached storage (NAS), and storage area networks (SAN).
  • Role in Distributed Networks | Store and manage data, ensuring availability and redundancy.

CLIENT DEVICES
  • Function | End-user devices for accessing network resources.
  • Types | Desktops, laptops, smartphones, tablets.
  • Role in Distributed Networks | Interface for users to access and interact with networked applications and services.

IMPACT OF MOBILE TECHNOLOGY ON DISTRIBUTED NETWORKS
In the dynamic landscape of technology, the advent and evolution of mobile technology have been pivotal in shaping distributed networks. With its rapid advancements and widespread adoption, mobile technology has significantly influenced and transformed the architecture, functionality, and capabilities of distributed networks. From enhancing connectivity and enabling the proliferation of smart devices to fostering the integration of the Internet of Things (IoT) and synergising with cloud computing, mobile technology has not only expanded the horizons of distributed networks but also redefined the way users and devices interact within them.

ENHANCED CONNECTIVITY
  • Development | Advancements in cellular technologies (4G, 5G).
  • Impact | Improved speed and reliability of mobile internet, enabling more devices to connect to distributed networks from any location.

PROLIFERATION OF SMART DEVICES
  • Development | Growth in the variety and capabilities of smart devices (smartphones, wearables).
  • Impact | Increased number of endpoints in distributed networks, leading to more data generation and higher network engagement.

IOT INTEGRATION
  • Development | Integration of Internet of Things (IoT) with mobile technology.
  • Impact | Expansion of network boundaries, incorporating a multitude of sensors and smart devices into distributed networks.

CLOUD COMPUTING SYNERGY
  • Development | Mobile access to cloud-based services and applications.
  • Impact | Enhanced flexibility and scalability in distributed networks, allowing users to access powerful computing resources and data storage remotely.

EDGE COMPUTING
  • Development | Processing data closer to the source (edge of the network).
  • Impact | Reduced latency and bandwidth use in distributed networks, improving efficiency for mobile devices.

The hardware used in distributed networks forms the backbone of our interconnected world, with each component playing a vital role in ensuring seamless communication and data exchange. The rapid advancements in mobile technology have not only expanded the reach and capabilities of these networks but have also driven innovation in how we interact with and benefit from distributed computing environments. As mobile technology continues to evolve, it will undoubtedly bring further enhancements and opportunities in the realm of distributed networks.
SECTION 5 | GREATER DECENTRALISATION OF THE WEB
The evolution of the internet towards a more decentralised model is a significant shift, influenced largely by the development and implementation of distributed systems. This section explores how distributed systems are acting as catalysts in driving the web towards greater decentralisation, and how this shift is fostering increased international-mindedness.

THE ROLE OF DISTRIBUTED SYSTEMS IN DECENTRALISING THE WEB
Breaking Down Centralised Control
  • Traditional Model | Initially, the web was more centralised, with data and services controlled by a limited number of providers.
  • Change Through Distributed Systems | Distributed systems distribute control across multiple nodes, reducing the concentration of power and control in the hands of a few.

Enhancing Data Sovereignty and Privacy
  • Data Control | In a centralised system, user data is often controlled by single entities, raising privacy concerns.
  • Distributed Approach | Distributed systems allow data to be stored and managed across various nodes, enhancing user control over their own data and privacy.

Resilience and Reduced Censorship
  • Centralised Vulnerabilities | Centralised networks are more susceptible to outages and censorship.
  • Distributed Networks | By nature, distributed systems are more resilient to failures and censorship, as the removal or malfunction of one node doesn’t incapacitate the entire network.

DECENTRALISATION AND INCREASED INTERNATIONAL-MINDEDNESS
Cross-Border Collaboration
  • Global Participation | A decentralised web fosters a more inclusive environment where individuals and organizations worldwide can contribute and collaborate.
  • Cultural Exchange | This collaboration leads to a greater exchange of cultural and ideological perspectives, enhancing international understanding and relationships.

Democratisation of Information
  • Equal Access | Decentralisation facilitates more equitable access to information, breaking down geographical and socio-economic barriers.
  • Diverse Perspectives | It allows for a multitude of voices and perspectives to be heard, fostering a more diverse and global discourse.

Fostering Global Communities
  • Community Building | Decentralised systems enable the formation of global communities around shared interests, causes, or goals, irrespective of physical borders.
  • Empowerment Through Technology | These communities leverage technology to initiate global movements, share knowledge, and drive international cooperation.

The movement towards a more decentralised web, significantly driven by distributed systems, is not just a technological evolution but also a cultural and social one. This shift is playing a crucial role in increasing international-mindedness, breaking down barriers, and fostering a more inclusive, resilient, and collaborative global community. As we continue to embrace and develop these technologies, we pave the way for a more interconnected and diverse world, where information and power are more evenly distributed.
SECTION 6 | LOSSY AND LOSSLESS COMPRESSION
Many files, such as images, videos and even text documents, can take up large amounts of memory, meaning an increased need for storage space and slower transfer speeds due to file size; this has led to the need for data compression. Data compression is the process of making files require less memory to store. When people talk about 'file size' they are usually referring to the memory required to store the file and not the physical size of the document or file.

There are two main methods of file compression: lossy and lossless. Each type of file compression has its benefits and disadvantages. To compress a file, software such as WinZip or Archive Utility is used; files zipped by one software brand should be able to be decompressed by other brands. Different file types use different methods of compression; for example, compressing an image will use a different algorithm to compressing a text document.

Some reasons for compression are:
✓ Compression makes the file size smaller so less space is needed to store the file
✓ Compression makes the file size smaller so files transfer faster over a network such as the internet (a rough worked example follows this list)
✓ Compression makes the file size smaller which helps with file streaming
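
To put the transfer benefit in rough numbers, here is a minimal sketch in Python using hypothetical figures: a 100 MB file sent over a 20 Mbps connection, assuming a 4:1 compression ratio. The numbers are illustrative only and are not taken from any particular tool or network.

  # Hypothetical figures: 100 MB file, 20 Mbps link, 4:1 compression ratio
  file_size_mb = 100
  bandwidth_mbps = 20
  compression_ratio = 4

  uncompressed_seconds = file_size_mb * 8 / bandwidth_mbps                      # 40.0 s
  compressed_seconds = (file_size_mb / compression_ratio) * 8 / bandwidth_mbps  # 10.0 s
  print(uncompressed_seconds, "s uncompressed vs", compressed_seconds, "s compressed")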

LOSSY COMPRESSION
The key element of lossy compression is that the file will lose quality when it is compressed. The loss of quality is not important for many files, and in many cases we do not even notice the reduction in quality.

Some key points on Lossy compression are:
✓ Lossy compression reduces the file size by removing some of the data; because of this, an exact match of the original data cannot be recreated. Quality is lost.
✓ Lossy compression uses an algorithm that looks to remove detail that is barely noticeable, for example if pixels next to each other in an image are almost the same colour then the lossy algorithm will give them the same value to reduce the bytes needed to store the detail (a toy sketch of this idea follows this list).
✓ Lossy compression is often used on sound files and images, such as MP3s and JPGs
✓ Lossy compression is often not a good option for files such as text documents
✓ Lossy compression can make file sizes smaller than is possible with lossless compression
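
A minimal sketch of the idea above, assuming a row of greyscale pixel values: neighbouring values within a small threshold are given the same value, so runs become longer but the small differences are discarded for good. This is a toy illustration only, not the actual algorithm used by formats such as JPG or MP3.

  def merge_similar_pixels(row, threshold=4):
      # Toy lossy step: reuse the previous value when a pixel is close to it,
      # permanently discarding the small differences between neighbours.
      out = []
      for value in row:
          if out and abs(value - out[-1]) <= threshold:
              out.append(out[-1])        # detail discarded here
          else:
              out.append(value)
      return out

  print(merge_similar_pixels([200, 202, 203, 255, 254, 120]))
  # [200, 200, 200, 255, 255, 120] -- close to, but not exactly, the original row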

LOSSLESS COMPRESSION
The key element of lossless compression is that no quality is lost during the process of compression. Lossless compression is used when it is important to maintain the original quality.

Some key points of lossless compression are:
✓ Lossless compression will not remove any quality from the file; the compressed version will be the same as the original when uncompressed (see the round-trip sketch after this list).
✓ Lossless compression uses an algorithm that looks for repeated data; this can be grouped and categorised, and a token given to mark where each group is used in the reconstruction
✓ Lossless compression is often used on files such as text files and images, for example DOCXs, GIFs and PNGs
✓ Lossless compression is often not a good option for audio files and high colour images
✓ Lossless compression is more limited than lossy compression in how small the file size can be made
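
As a quick illustration of that round-trip property, the sketch below uses Python's standard-library zlib module (chosen only as a convenient lossless compressor for the example, not one of the tools named earlier) to compress some repetitive data and check that the original is reconstructed exactly.

  import zlib

  original = b"AAAABBBCCDAAAA" * 100           # repetitive data compresses well
  compressed = zlib.compress(original)
  restored = zlib.decompress(compressed)

  print(len(original), "->", len(compressed), "bytes")
  assert restored == original                  # lossless: an exact reconstruction
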
COMPRESSION METHODS

Run Length Encoding (RLE)
Run Length Encoding is a method of compression that looks for repeating patterns and then encodes them into one item of data of a specified length.
[Image: a grid of white and red pixels, 8 pixels per row, used in the worked example below]
Take the top row of the image: it has 8 white pixels, while the second row has 1 white pixel, 2 red pixels, 2 white pixels and so on. An uncompressed representation would store each pixel individually; if this were an 8-bit image, the top row's 8 pixels would each need 8 bits for their colour, so 8 pixels x 8 bits = 64 bits to represent the 8 white pixels. With run-length encoding we can simply encode this as '8 white pixels in a row' (8W): 8 bits to represent the length of the run and 8 bits to represent the colour, meaning the top row can be compressed from 64 bits to just 16 bits.
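
A minimal run-length encoder and decoder along these lines is sketched below in Python; the pixel row is the 8-white-pixel top row from the example, with the colour stored as a single character for simplicity.

  def rle_encode(pixels):
      # Collapse runs of identical values into (count, value) pairs
      encoded = []
      for value in pixels:
          if encoded and encoded[-1][1] == value:
              encoded[-1] = (encoded[-1][0] + 1, value)
          else:
              encoded.append((1, value))
      return encoded

  def rle_decode(pairs):
      # Expand (count, value) pairs back into the original sequence
      return [value for count, value in pairs for _ in range(count)]

  top_row = ["W"] * 8                          # 8 white pixels, as in the example above
  print(rle_encode(top_row))                   # [(8, 'W')] -- one count plus one colour
  assert rle_decode(rle_encode(top_row)) == top_row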

Huffman Encoding
Huffman Encoding is often used in lossless compression. It uses a greedy algorithm to build a binary tree that allocates each item a unique code, ensuring that the most frequently occurring items get the shortest codes.
"learning is a journey, enjoy" ​
Doing a frequency analysis on the quote above we can see that the most frequently occurring characters are 'n' and the space, which appear 4 times each, followed by 'e' appearing 3 times. Continuing this frequency analysis we can put each letter in a chart in order of frequency.

We can then use a tree to illustrate and allocate each letter a binary code; the letters that occur most frequently go nearer the top of the tree, and the codes allocated to these letters require fewer bits than those further down the tree.
[Image: Huffman tree built from the letter frequencies, showing the binary code allocated to each letter]
Looking at the allocation of encoding in this method, we can see that each letter in the quote is allocated a unique binary representation; this table of codes can then be used as the key to recovering the compressed file with zero loss to the original quality.
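
A compact sketch of Huffman coding applied to the quote is shown below, using Python's heapq module. The exact codes produced may differ from those in the missing image, since ties between equally frequent symbols can be broken in different ways, but the key behaviour is the same: more frequent symbols receive shorter codes, and the code table is the key for exact reconstruction.

  import heapq
  from collections import Counter

  def huffman_codes(text):
      freq = Counter(text)
      # Each heap entry: (frequency, tie-breaker, {symbol: code_so_far})
      heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
      heapq.heapify(heap)
      next_id = len(heap)
      while len(heap) > 1:
          f1, _, left = heapq.heappop(heap)    # two least frequent subtrees
          f2, _, right = heapq.heappop(heap)
          merged = {s: "0" + c for s, c in left.items()}       # left branch gets a 0
          merged.update({s: "1" + c for s, c in right.items()})  # right branch gets a 1
          heapq.heappush(heap, (f1 + f2, next_id, merged))
          next_id += 1
      return heap[0][2]

  quote = "learning is a journey, enjoy"
  codes = huffman_codes(quote)
  encoded = "".join(codes[ch] for ch in quote)
  print(codes)                                 # shorter codes for more frequent symbols
  print(len(encoded), "bits vs", 8 * len(quote), "bits uncompressed")
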
SECTION 7 | DECOMPRESSION SOFTWARE
Decompression software plays a vital role in moving data between systems, especially when dealing with large data sets. This section evaluates the use of decompression software in the transfer of information, considering its advantages, limitations, and overall impact on data management and transmission.

Decompression Software is a tool used to restore data compressed by compression algorithms to its original state. It enables the efficient transfer of large files over the internet or other networks by reducing file size.
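
As a small illustration, the sketch below uses Python's standard-library zipfile module to create an archive and then restore ('decompress') its contents; the file names are hypothetical, and the module simply stands in for dedicated decompression software.

  import zipfile

  # Create a hypothetical archive containing one compressed file ...
  with zipfile.ZipFile("archive.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
      zf.writestr("report.txt", "a large data set would go here " * 1000)

  # ... then decompress it back to its original form
  with zipfile.ZipFile("archive.zip") as zf:
      print(zf.namelist())                     # ['report.txt']
      zf.extractall("restored/")               # restores the uncompressed file(s)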

ADVANTAGES
  • Reduced Bandwidth Usage | Compressed files require less bandwidth to transfer, making the process faster and more efficient, especially in bandwidth-limited environments.
  • Faster Transfer Speeds | Smaller file sizes lead to quicker transmission times, a crucial factor in time-sensitive applications.
  • Cost-Effective | Lower data transfer volumes can reduce costs associated with data transmission, particularly important for businesses and large-scale operations.
  • Storage Efficiency | Compressed files take up less storage space, both in transit and when archived.

LIMITATIONS
  • Processing Overhead | Decompression requires processing power. On systems with limited resources, this can be a bottleneck.
  • Potential for Data Corruption | Errors during compression or decompression can lead to data corruption, although modern algorithms are designed to minimize this risk.
  • Compatibility Issues | The need for compatible decompression software on the receiving end can be a limitation, especially in diverse technological environments.
  • Time Consumption | The process of decompressing large files can be time-consuming, offsetting some of the time saved in transmission.

Decompression software, in conjunction with compression tools, facilitates the movement of large amounts of data, a key factor in today's data-driven world. Fields like scientific research, multimedia, and cloud computing, where large datasets are common, greatly benefit from efficient data transfer methods enabled by decompression software. Decompression software must be robust against security threats, as compressed files can be used to conceal malicious content.

The use of decompression software in the transfer of information is a double-edged sword, offering significant benefits in terms of efficiency and cost-effectiveness, while also presenting challenges like processing overhead and potential data corruption. However, the advantages largely outweigh the limitations, especially in an era where the rapid and efficient transfer of large data sets is becoming increasingly critical. As technology evolves, further advancements in compression and decompression algorithms are expected to enhance these processes, making them more efficient, reliable, and user-friendly.
