Software Delivery Glossary
From thin clients to hypervisors, and from containers to kernels, software delivery is a supremely complex subject, rife with jargon and esoteric knowledge! Read definitions of all of software delivery's technical terms, and gain further insight into the technology behind virtualization solutions with Software2's comprehensive Software Delivery Glossary.
Ultimate guide to VDI
A complete guide to VDI in higher education, including the history of VDI as a solution, the benefits and challenges of VDI and alternative technologies.
Ultimate guide to application virtualization
The ultimate guide to application virtualization, from technical information to major solutions' benefits.
Guide to creating RFPs for any application delivery or virtualization solution tender
AppsAnywhere Sample Business Case
A comprehensive sample business case for AppsAnywhere in Universities and Colleges across the world, covering key benefits and return on investment.
Software delivery glossary
An extensive and comprehensive glossary of software delivery terms by Software2
Application layering is the technology and principle behind application virtualization; the process of dividing applications into discrete, base components, separate from the operating system they’re to be executed on in a way that allows them to still communicate with that operating system.
Application packaging is the process of assembling the collection of files that make up the structure of an application and bundling them into a package/appset targeted for automated deployment. These packages are tailored to meet the installation requirements of specific environments and corporate standards. This is done using a number of methods including, but not limited to:
- Configurable app events (which are set to trigger at various points of virtualization)
- Virtual isolation and the use of environment variables within the registry to replace any hardcoded paths.
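The hardcoded-path substitution mentioned above can be sketched as a simple rewrite step applied to registry values at packaging time. This is an illustrative sketch, not any packaging tool's actual API; the path-to-variable mapping below is an assumption for demonstration.

```python
# Map of well-known absolute paths to environment-variable placeholders.
# A real packager would derive these from the machine it packages on.
PATH_VARIABLES = {
    r"C:\Program Files": "%ProgramFiles%",
}

def substitute_paths(registry_value: str) -> str:
    """Replace a hardcoded path prefix with an environment variable so the
    package works regardless of where the target machine keeps that folder."""
    for literal, variable in PATH_VARIABLES.items():
        if registry_value.startswith(literal):
            return variable + registry_value[len(literal):]
    return registry_value  # nothing to substitute

print(substitute_paths(r"C:\Program Files\MyApp\app.exe"))
```

At deployment time the operating system expands the variable back to the correct local path, which is what makes the same package portable across differently configured machines.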
Application streaming is the method of delivery for virtualized applications. Only the essential portions of an application are initially streamed to the end-user device, with non-essential parts, such as some context menus and functions not required for start-up, streamed on demand.
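The stream-essentials-first, fetch-the-rest-on-demand behaviour can be sketched as a lazy cache. This is a conceptual sketch only; the block names and the stand-in "server" are hypothetical, not part of any real streaming product.

```python
class StreamedApplication:
    """Sketch of on-demand streaming: launch-critical blocks are fetched
    up front; everything else is pulled the first time it is touched."""

    def __init__(self, essential: dict, fetch_remote):
        self.blocks = dict(essential)      # streamed before first launch
        self.fetch_remote = fetch_remote   # callable: block name -> bytes

    def read(self, name: str) -> bytes:
        if name not in self.blocks:        # non-essential block: stream now
            self.blocks[name] = self.fetch_remote(name)
        return self.blocks[name]           # cached for later reads

# Hypothetical server stub standing in for the streaming back end.
server = {"core.dll": b"core", "help.chm": b"help"}
app = StreamedApplication({"core.dll": server["core.dll"]}, server.__getitem__)
app.read("help.chm")  # fetched on demand, then cached locally
```

The user can launch the app as soon as the essential blocks arrive, which is why streamed applications start much faster than a full download would allow.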
BYOD, or 'Bring Your Own Device' is a end-user computing principle that enables users to access organization software, files, and systems on their own, non-managed machines. Allowing users to work on their own machine, whether it be a desktop computer, laptop, tablet or smartphone, can significantly improve productivity and makes mobile working possible.
A connection broker is a resource manager that helps to manage pools of connections to resources such as virtual/remote desktops or databases. Connection brokers allow for quick reuse of a connection without having to set up a new connection for each use.
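The pooling-and-reuse idea behind a connection broker can be sketched in a few lines. This is a minimal sketch of the principle, not any vendor's broker; the desktop "factory" is a hypothetical stand-in for real provisioning.

```python
class ConnectionBroker:
    """Minimal sketch of a broker: hand out pooled connections and
    take them back for reuse instead of building a new one each time."""

    def __init__(self, create):
        self.create = create   # factory for brand-new connections
        self.idle = []         # connections ready for reuse

    def acquire(self):
        return self.idle.pop() if self.idle else self.create()

    def release(self, conn):
        self.idle.append(conn)  # keep it warm for the next user

class DesktopFactory:
    """Hypothetical factory standing in for real desktop provisioning."""
    def __init__(self):
        self.created = 0
    def __call__(self):
        self.created += 1
        return f"desktop-{self.created}"

factory = DesktopFactory()
broker = ConnectionBroker(factory)
d = broker.acquire()    # first request: a new connection is built
broker.release(d)
d2 = broker.acquire()   # second request: the same connection is reused
```

Only one connection is ever created here, which is the point: setup cost is paid once and amortized across many sessions.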
A container provides system-level virtualization by abstracting the “user space” or “namespace”. One of the key differences between a container and a traditional virtual machine is that a container uses and shares its host system’s kernel. The container host system kernel is shared with all containers that run on the container host. Each container provides its own isolated user space/namespace so multiple containers can be run on a single host.
A container does not provide any form of hardware-based virtualization like a traditional virtual machine does. Each VM also runs its own OS kernel and is not shared with the Hypervisor host or other virtual machines. As containers share the host's system kernel they are bound by the host OS. So as an example you cannot run a Windows container on a Linux host. Containers are mostly used for stateless applications, such as web-based applications, where no configuration or user data is stored within the container.
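The shared-kernel-versus-private-kernel distinction above can be modelled in a toy sketch. This is purely conceptual, with assumed class names; it shows the relationship, not how container runtimes actually work.

```python
class Kernel:
    def __init__(self, os_family):
        self.os_family = os_family   # e.g. "linux" or "windows"

class Container:
    """Sketch: a container borrows the host's kernel but keeps its own
    isolated user space/namespace."""
    def __init__(self, host_kernel, image_os):
        if image_os != host_kernel.os_family:
            # mirrors the rule above: no Windows container on a Linux host
            raise ValueError("container image must match the host kernel")
        self.kernel = host_kernel    # shared with the host, not copied
        self.user_space = {}         # isolated per container

class VirtualMachine:
    """A VM, by contrast, boots its own private kernel on virtual hardware."""
    def __init__(self, guest_os):
        self.kernel = Kernel(guest_os)

host = Kernel("linux")
a, b = Container(host, "linux"), Container(host, "linux")
vm = VirtualMachine("windows")  # a VM's OS need not match the host
```

Both containers hold a reference to the one host kernel, while the VM carries its own; that single difference drives the density, start-up speed and OS-compatibility trade-offs between the two technologies.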
A data center is a building, room or space meant to contain computing resources and components, whether they be telecommunications hardware, different types of servers or security equipment. Data centers may be on-site in universities or hosted by third-party companies. They exist in four main categories: enterprise data centers, managed services data centers, colocation data centers and cloud data centers.
A proprietary technology of VMware, ESX (Elastic Sky X) servers exist for a type of server virtualization. Described as an 'enterprise-class, type-1 hypervisor', ESX deploys and runs virtual machines directly on the hardware, without needing to be installed on an operating system; the essential operating-system components are included in the hypervisor itself.
A fat client (also called a heavy, rich or thick client) is a computer, in client–server architectures or networks, that typically provides rich functionality independently of the central server.
A hypervisor creates and runs virtual machines and may come in the form of software, hardware, firmware or combinations thereof. When a hypervisor runs multiple virtual machines on a computer, that computer is referred to as the host machine and the individual virtual machines are referred to as guest machines.
A disk image is a snapshot or exact replica of the contents of a digital storage device, whether that be a single drive, an install package or an entire machine.
Disk image size is the overall size of a disk image, in megabytes or gigabytes, representing the complete contents of the image, including the operating system, drivers, software and preferences.
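Totalling an image's size from its components is simple arithmetic, sketched below. The component figures are made up for illustration; real images vary widely.

```python
def image_size_gb(component_sizes_mb: dict) -> float:
    """Sum the per-component sizes (OS, drivers, software, preferences)
    and report the total in gigabytes."""
    total_mb = sum(component_sizes_mb.values())
    return total_mb / 1024  # 1 GB = 1024 MB

# Hypothetical breakdown of a managed desktop image, in megabytes.
image = {"os": 15_360, "drivers": 512, "software": 8_192, "preferences": 64}
print(f"{image_size_gb(image):.1f} GB")
```

Image size matters operationally: the larger the image, the longer it takes to deploy, clone or re-image a machine.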
Any type of data, in this case packaged application files, that is separated from the OS it is to be executed on by a virtual structure but is still visible to that OS as if it were locally installed.
Any type of data, in this case packaged application files, that is separated from the OS it is to be executed on by a virtual structure and is only visible within the virtualization application. Also referred to as 'sandboxed' data.
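Sandboxed isolation is often built as an overlay: the application reads through to the host's files but all its writes land in a private layer. The sketch below illustrates that principle with assumed names; it is not how any particular virtualization product is implemented.

```python
class SandboxedFileSystem:
    """Sketch of isolation: reads fall through to the host's view,
    but every write lands in a private overlay the host never sees."""

    def __init__(self, host_files: dict):
        self.host = host_files   # visible, shared layer
        self.overlay = {}        # isolated, per-application layer

    def read(self, path: str) -> str:
        if path in self.overlay:       # the app sees its own writes first
            return self.overlay[path]
        return self.host[path]

    def write(self, path: str, data: str) -> None:
        self.overlay[path] = data      # the host layer is never modified

host = {"config.ini": "system default"}
sandbox = SandboxedFileSystem(host)
sandbox.write("config.ini", "app-specific override")
sandbox.read("config.ini")   # the app sees its override
host["config.ini"]           # the host still sees the original
```

Discarding the overlay removes every trace of the application, which is why sandboxed apps can be deployed and withdrawn without touching the underlying OS.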
A kernel is the portion of system memory allocated to running the operating system. To protect the operating system and its stability from any software applications that may be performing unpredictably, the kernel and the user space are always discretely isolated from one another.
Please see: 'User space'.
Non-persistent VDI delivers virtual desktops that are destroyed on log out. Settings and customized aspects of the virtual machine are lost between each session and users are provided with a fresh virtual machine each time VDI is used. Despite the obvious drawbacks of this, the benefits are that non-persistent VDI licenses may be cheaper and less server infrastructure is required. This can help to make VDI a more accessible and viable solution for higher education IT and can greatly reduce obstacles to BYOD.
PC over IP (PCoIP) is a remote desktop protocol and a technology behind desktop and application virtualization. Very generally, processes are carried out on a cloud or network server and the virtualized app or desktop is pixel streamed to the endpoint.
Persistent VDI is a type of desktop virtualization in which the user's desktop 'persists' and is customizable in the same way a traditional desktop would be. While this is unquestionably a better user experience and more convenient method of delivering software, more server infrastructure is required and VDI licenses may be more expensive.
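The behavioural difference between the two VDI modes above boils down to what happens to user state at each log-in. This toy sketch, with assumed names, captures just that contrast.

```python
class VirtualDesktop:
    """Sketch contrasting the two VDI modes: persistent desktops keep
    user settings across sessions, non-persistent ones start fresh."""

    def __init__(self, persistent: bool):
        self.persistent = persistent
        self.settings = {}

    def log_in(self):
        if not self.persistent:
            self.settings = {}   # a fresh machine every session

    def customize(self, key, value):
        self.settings[key] = value

p = VirtualDesktop(persistent=True)
n = VirtualDesktop(persistent=False)
for desktop in (p, n):
    desktop.log_in()
    desktop.customize("wallpaper", "campus.png")
p.log_in()   # the customization survives
n.log_in()   # the customization is gone
```

That one reset line is the whole trade-off: keeping state per user costs storage and infrastructure, while discarding it keeps the estate cheap and uniform.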
Pixel streaming is a method of distributing content to end-users remotely that keeps data transfer manageable, with any execution or processes happening server-side. Pixel streaming is often how virtual desktops are distributed to users.
A thin client is a bare-bones computer that is optimized for remote connection to server-based computing environments. Most processing and execution is carried out server-side, with the end-hardware taking care of display and user input. Because functions are not performed locally, thin clients need little processing power, which means this setup can be used to save money on end-hardware.
Unified Application Delivery is the ability to integrate multiple application delivery technologies and bring them together into one concise and easy-to-use service, delivered in a customizable, centralized app store.
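The unifying idea, one catalog in front of many delivery technologies, can be sketched as follows. The app names and delivery back ends here are hypothetical placeholders, not a description of any real product's internals.

```python
class AppStore:
    """Sketch of unified delivery: one catalog maps each title to the
    delivery technology that suits it, behind a single launch call."""

    def __init__(self):
        self.catalog = {}   # app name -> (delivery method, launcher)

    def register(self, name, method, launcher):
        self.catalog[name] = (method, launcher)

    def launch(self, name):
        method, launcher = self.catalog[name]
        return f"{name} via {method}: {launcher(name)}"

# Hypothetical delivery back ends; real ones would be app virtualization,
# VDI, local install and so on.
store = AppStore()
store.register("SPSS", "app virtualization", lambda app: "streamed")
store.register("MATLAB", "VDI", lambda app: "remote desktop")
store.launch("SPSS")  # the user never sees which technology ran
```

The value is on the user's side: every title launches the same way from one place, while IT remains free to pick the best delivery technology per application.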
Also referred to as 'namespace'. User space refers to the system memory allocated, by a system, to executing and running software applications. User space is distinct from a system's kernel in that the kernel is dedicated to operating-system processes. The division and separation of these two areas of system memory ensure that stray or otherwise misbehaving processes do not affect the OS.