Expert Position Papers

Below are a few Cloud Computing Position Papers provided by RESERVOIR partners for the Cloudscape-II event organised by the OGF-Europe Industry Expert Group. The views expressed in these position papers are those of the authors and do not necessarily reflect the views of their organisations or affiliates.


Philippe Massonet, CETIC

Cloud Computing – Benefits, Risks And Recommendations for Information Security

The European Network and Information Security Agency (ENISA) has published a study and report on “Cloud Computing Security Risk Assessment” with the support of a group of experts, including representatives from enterprise and the EC-funded RESERVOIR project. RESERVOIR was represented by Philippe Massonet of CETIC, in the context of the Emerging and Future Risk Framework project, which carried out a risk assessment of the cloud computing business model and technologies. The result is an in-depth and independent analysis that outlines some of the information security benefits and key security risks of cloud computing. The report also provides a set of practical recommendations. Two complementary reports are also available: “Cloud Computing Information Assurance Framework” and “An SME perspective on Cloud Computing”. The former describes a set of assurance criteria designed to assess the risk of adopting cloud services, to compare the offerings of different cloud providers, to obtain assurance from the selected cloud providers, and to reduce the assurance burden on cloud providers. The latter describes the results of a survey of the actual needs, requirements and expectations of SMEs for cloud computing services.

Security Benefits of Cloud Computing

Cloud computing has significant potential to improve security and resilience. Cloud security also benefits from the economies of scale of cloud computing: all kinds of security measures are cheaper when implemented on a larger scale. Cloud security may be presented as a market differentiator, because customers can compare and select cloud providers based on the quality of protection provided. Cloud providers can hire highly qualified security personnel to develop and deploy a scalable security infrastructure. Furthermore, cloud providers can supply open and standardised interfaces to managed security services, thus providing clients with high-quality security. With rapid scaling of resources, cloud providers can dynamically scale defensive resources on demand to achieve a high level of resilience even when under attack. Cloud providers can offer audit, evidence-gathering and forensic analysis services. Cloud providers can also deliver more timely and effective security patches; hardening of customer virtual machines and application of security patches to them can be offered as a service. Audits and SLAs encourage better risk management practices to deal with the potential penalties for SLA breaches and the impact on reputation. Even though resource concentration is an attractive target for attackers, it has the advantages of cheaper physical access control (per unit resource) and an easier and cheaper application of a comprehensive security policy.

Security Risks of Cloud Computing

In terms of policy and organisational risks, a number of issues emerge: Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) vendor lock-in; loss of governance, because the cloud user cedes control to the cloud provider on a number of issues that may affect security; compliance challenges, because the cloud provider may refuse audits; potential loss of business reputation due to resource sharing and co-tenant malicious activities; cloud service termination or failure; and acquisition of a cloud provider followed by a shift in strategy, which may put at risk some of the agreements on which cloud clients rely. In the context of service chains that are partially outsourced to cloud providers, any failure could cascade and cause significant economic damage.

In terms of technical risks, the following have been identified: resource exhaustion due to over- or under-provisioning; isolation failures related to resource sharing and multi-tenancy, leading to breaches of confidentiality; malicious insider activity at the cloud provider, which may lead to breaches of the confidentiality, integrity and availability of data and services; management interface risks linked to remote access over the Internet and browser vulnerabilities, leading to unauthorised access; interception of data in transit; data leakage on upload/download; insecure or ineffective deletion of data; distributed denial of service; economic denial of service; loss of encryption keys; malicious probes or port scanning, leading to loss of confidentiality, integrity and availability of services and data; compromise of the service engine; and conflicts between customer hardening procedures and cloud provider procedures.

In terms of legal risks, the following have been analysed: subpoena and e-discovery resulting from the seizure of shared cloud resources containing data of many customers; managing customer data in multiple jurisdictions; data protection risks; and licensing risks. In addition to the above cloud-specific risks, non-cloud-specific risks are also identified and analysed.

Conclusions

The security benefits and risks of cloud computing are described in more detail in the report “Cloud Computing Security Risk Assessment”. The complementary reports “Cloud Computing Information Assurance Framework” and “An SME perspective on Cloud Computing” are available on the ENISA web site.

Acknowledgements

We would like to thank the ENISA Team, especially Daniele Catteddu and Giles Hogben, who coordinated and edited the reports.

Thijs Metsch, Sun Microsystems

Clouds, Grids and even HPC have been buzzwords for concepts in science and industry over the past few years. A major disadvantage of buzzwords, and the technologies behind them, is that they tend to be presented as “the” solution for everything. Grids and HPC have come a long way and have a proven track record within their specific communities. Cloud computing is the new player on the block. Again, this invention (if you can call it that) was unavoidable: it was driven mainly by business demands, but also by demand from the community.

Now, again, a new technology is presented as the solution that solves everything and will replace existing technologies. Many use cases fit perfectly in traditional HPC and Grids, while others fit better in Clouds. New things can also be done that were previously impossible. The important point is that HPC, Grids and Clouds can and must co-exist. Overall there is still work to be done for all of them. For example, communities are still asking for and demanding interoperability and portability. Clouds can easily address this with the help of virtualisation technologies (even though virtualisation might not be a key component of the Cloud). Virtual workloads, irrespective of whether they are Infrastructure, Platform or Software as a Service (IaaS, PaaS, SaaS), can be moved, migrated and started on demand. The real focus is no longer the hosting and deployment of services but how to provide services to the end-user, which hopefully results in more user-friendly services. The missing part is interoperability between Cloud providers. The Open Cloud Computing Interface (OCCI) within OGF is aimed at addressing this and ensuring portability and interoperability.

Grids and HPC can also make use of Cloud services. Existing technologies like Distributed Resource Managers can easily be provisioned to the end-user of Cloud services, and this can be done by pushing a service into the cloud that accesses the Grid or HPC resources. Such InterClouds, where private as well as public resources are combined for certain purposes and use cases, are one of the key features of the Cloud. Here billing plays a major role: the end-user can decide when to use which resource at a certain price. Decisions can then be made based on price, availability and guaranteed performance of Cloud services, as sketched below.
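
As an illustration of how such a decision could be automated, here is a minimal sketch, assuming hypothetical provider names, prices and SLA metrics (it is not tied to any real billing API), of choosing the cheapest Cloud offer that still satisfies the required availability and performance:

```python
# Minimal, hypothetical sketch: pick the cheapest Cloud offer that still meets
# the required availability and performance. Provider names, prices and
# metrics below are illustrative only.
from dataclasses import dataclass


@dataclass
class Offer:
    provider: str
    price_per_hour: float   # e.g. EUR per VM hour
    availability: float     # fraction of time the SLA guarantees the service is up
    performance: float      # relative benchmark score promised in the SLA


def choose_offer(offers, min_availability, min_performance):
    """Return the cheapest offer meeting the availability and performance requirements."""
    suitable = [o for o in offers
                if o.availability >= min_availability
                and o.performance >= min_performance]
    return min(suitable, key=lambda o: o.price_per_hour) if suitable else None


offers = [
    Offer("private-hpc-cluster", 0.00, 0.95, 1.0),   # already paid for, limited capacity
    Offer("public-cloud-a",      0.12, 0.999, 0.8),
    Offer("public-cloud-b",      0.09, 0.99, 0.9),
]
best = choose_offer(offers, min_availability=0.99, min_performance=0.85)
print(best.provider if best else "no suitable offer")   # -> public-cloud-b
```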

Maik Lindner, SAP Research

Enterprise IT and Cloud Computing

Cloud computing is still in its early stages and is constantly undergoing change as new vendors, offers and services appear in the cloud market. The evolution of cloud computing models is driven by cloud providers constantly bringing new services to the ecosystem, or revamping existing services into newer, more efficient ones, primarily triggered by the rapidly changing requirements of consumers. While cloud computing has so far been adopted predominantly by start-ups and SMEs, wide-scale enterprise adoption of the cloud computing model is still in its early stages, with enterprises carefully examining the various usage models in which cloud computing can be deployed to support their business operations.

Enterprise IT

Typical components of enterprise IT are Sales and Distribution (SD), banking and financials, customer relationship management (CRM) and supply chain management (SCM). These applications face major technical and non-technical challenges when deployed in Cloud environments. For instance, such IT systems provide mission-critical functions, and enterprises have clear security and privacy concerns. Classical transactional systems typically use a shared-everything architecture, while Cloud platforms mostly consist of shared-nothing commodity hardware.

An optimal adoption decision cannot be established for all individual cases, as the types of resources (infrastructure, storage, software) obtained from a cloud depend on the size of the organisation, understanding of IT impact on business, predictability of workloads, flexibility of existing IT landscape and available budget/resources for testing and piloting.

Nevertheless, opportunities arise when highly complex enterprise applications are decomposed into simpler functional components, which can then be characterised and engineered accordingly.

Hybrid on-demand add-ons to legacy applications

At the moment the main adopters of cloud computing are small companies and start-ups that are not tied to a legacy of IT investments. By contrast, for more mature enterprises the cloud concept is new and harder to adapt to. This model has yet to fully meet the criteria of enterprise IT, but it is getting there at an accelerated pace as a rich and vibrant ecosystem is being developed by start-ups and, now, major IT vendors. One of the obstacles to bringing large-scale business applications to the cloud is the migration of existing applications to the cloud architecture. The expected average lifetime of an Enterprise Resource Planning (ERP) product is 15 years, which means that companies will need to face this issue sooner or later as they try to evolve towards the new IT paradigm.

Applications that are at the centre of gravity of the enterprise are likely to remain on-premise for various reasons, such as security, criticality, and heavy integration with other applications. Therefore, enterprises should adopt a model that is a hybrid of on premise and on-demand models in order to fully leverage the benefits of the cloud computing paradigm while maintaining their current investment. Such a hybrid model should support transitions between on-premise and on-demand modes of operation for certain types of applications along with their data sets.

In order to avoid vendor lock-in and promote a “mix and match” of applications, services and utilities, these applications should be interchangeable and cloud-agnostic. Otherwise there will be no financial incentive for adoption.

Of particular interest in all categories are cloud-native applications. These are applications that leverage the wisdom of the crowds, rely on cross-enterprise collaboration by default, or provide added value because multiple tenants share the same cloud-based infrastructure. Cloud-native applications are also characterised by the special data management features that arise because of the scale, distribution and unstructured nature of the data on the cloud. Under such conditions, consistency has to be relaxed, and relying on pre-defined data schemas becomes infeasible. To make cloud-native applications a reality, an approach is required that departs from conventional Relational DataBase Management Systems (RDBMS) and from the separation of on-line transaction processing (OLTP) and on-line analytical processing (OLAP).

A suite of core business applications as managed services can also be an attractive option, especially for small and medium companies. Despite considerable engineering challenges, leading software providers are already offering tailored business suite components as hosted services which enhance existing IT environments.

Ignacio Llorente, Complutense University of Madrid

Flexibility and Interoperability in “IaaS” Cloud Computing

Future enterprise data centres will look like private clouds, supporting a flexible and agile execution of virtualised services and combining local with public cloud-based infrastructure to enable highly scalable hosting environments. The key component in these cloud architectures will be the cloud management system, also called the cloud operating system (OS), which is responsible for the secure, efficient and scalable management of the cloud resources. The cloud OS is displacing the “traditional” OS, which will become part of the application stack.

Flexibility in Cloud Operating Systems

A Cloud OS manages the complexity of a distributed infrastructure in the execution of virtualised service workloads. The Cloud OS manages the servers, hardware devices and infrastructure services that make up a cloud system, giving users the impression that they are interacting with a single cloud of infinite, elastic capacity. In the same way that a multi-threaded OS defines the thread as the unit of execution and the multi-threaded application as the management entity, supporting communication and synchronisation instruments, a Cloud OS defines the VM as the basic execution unit and the multi-tier virtualised service (a group of VMs) as the basic management entity, supporting different communication instruments and their auto-configuration at boot time. This concept helps to create scalable applications because VMs can be added as and when needed. Individual multi-tier applications are isolated from each other, but individual VMs within the same application are not, as they may share a communication network and services as and when needed.

OpenNebula, which is being enhanced in the RESERVOIR project, is an open source Cloud management toolkit that provides the Cloud OS functionality on a wide range of technologies. The main differentiation of OpenNebula is not its leading-edge functionality but its open, modular and extensible architecture, which enables seamless integration with any service and component in the ecosystem. The open architecture of OpenNebula provides the flexibility that many enterprise IT shops need for internal cloud adoption. Cloud computing is about integration: one solution does not fit all. Moreover, the right configuration and components in a Cloud architecture also depend on the execution requirements of the service workload.
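
Returning to the multi-tier management model described above, the following is a purely conceptual sketch of a multi-tier virtualised service managed and scaled as a single entity. It is not OpenNebula's actual API; the class names, tiers and attributes are illustrative assumptions.

```python
# Conceptual sketch only: a multi-tier virtualised service modelled as a group
# of VMs managed as one entity. This is not OpenNebula's actual API; names and
# attributes are illustrative.
from dataclasses import dataclass, field


@dataclass
class VM:
    tier: str
    cpu: int
    memory_mb: int


@dataclass
class MultiTierService:
    name: str
    vms: list = field(default_factory=list)

    def add_tier(self, tier, count, cpu, memory_mb):
        """Add `count` identical VMs forming one tier of the service."""
        self.vms.extend(VM(tier, cpu, memory_mb) for _ in range(count))

    def scale(self, tier, extra):
        """Elastically grow a tier by cloning one of its existing VMs."""
        template = next(vm for vm in self.vms if vm.tier == tier)
        self.vms.extend(VM(tier, template.cpu, template.memory_mb) for _ in range(extra))


service = MultiTierService("web-shop")
service.add_tier("frontend", count=2, cpu=1, memory_mb=1024)
service.add_tier("database", count=1, cpu=4, memory_mb=8192)
service.scale("frontend", extra=2)   # add VMs as and when load grows
print(len(service.vms))              # 5 VMs, managed as a single service
```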

Interoperability at the Cloud Management Level

The IEEE defines interoperability as “the ability of two or more systems or components to exchange information and to use the information that has been exchanged”, and Wikipedia introduces interoperability as “the property referring to the ability of diverse systems and organisations to work together (inter-operate)”. Since the cloud management system is the core component of any cloud solution, interoperability is crucial to its success. We can compare the cloud OS with the kernel of a “traditional” operating system: the cloud OS provides the basic functions of a cloud and requires well-defined communication with the underlying devices, as well as interfaces to expose administration and user functionality.

At the cloud management level, interoperability means:

  • Modularity and flexibility to interface easily with any service or technology in the virtualisation and cloud ecosystem.
  • Standardisation to avoid vendor lock-in and to create a healthy community.
In fact interoperability should be evaluated from three different angles:
  1. Infrastructure User Perspective: Users, application developers, integrators and aggregators require a standard interface for the management of virtual machines, networks and storage. OCCI is a simple REST API for Infrastructure-as-a-Service-based Clouds that is being defined in the context of OGF. This interface represents the first standard specification for the life-cycle management of virtualised resources (a minimal request sketch follows this list). OpenNebula has been the first reference implementation of this open cloud interface, and also implements the Amazon EC2 API.
  2. Infrastructure Management Perspective: Administrators require the cloud OS to interface with existing infrastructure and management services, so that it fits into any data centre. OpenNebula provides a flexible back-end that can be integrated with any service for virtualisation, storage and networking.
  3. Infrastructure Federation Perspective: Administrators require the cloud OS to manage resources from partner and commercial clouds.
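
To make the infrastructure user perspective more concrete, here is a minimal sketch of what creating a compute resource through an OCCI-style REST interface might look like. The endpoint URL and attribute values are hypothetical, and the header rendering only approximates the OGF OCCI text format; consult the OCCI specification for the normative syntax.

```python
# Illustrative sketch of creating a compute resource via an OCCI-style REST
# interface. The endpoint URL and attribute values are hypothetical, and the
# header rendering only approximates the OGF OCCI text format.
import requests

OCCI_ENDPOINT = "https://cloud.example.org:3000"   # hypothetical OCCI service

headers = {
    "Content-Type": "text/occi",
    "Category": 'compute; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"',
    "X-OCCI-Attribute": "occi.compute.cores=2, occi.compute.memory=4.0",
}

# POSTing to the compute collection asks the provider to instantiate a new VM.
response = requests.post(OCCI_ENDPOINT + "/compute/", headers=headers)
response.raise_for_status()

# Providers typically return the location (URI) of the newly created resource.
print("New compute resource:", response.headers.get("Location"))
```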

With growing high-end computing demands, cloud operating systems will continue to be a very active field of research and development. An open and flexible approach to cloud management ensures uptake and simplifies adaptation to different environments, and is key for interoperability. The existence of an open and standards-based cloud management system like OpenNebula provides the foundation for building a complete cloud ecosystem, ensuring that new components and services in the ecosystem have the widest possible market and user acceptance.