With the growth of cloud multimedia applications, services and devices, multimedia delivery is expected to become the dominant class of Internet traffic and to keep increasing rapidly. To serve such large-scale multimedia applications, more and more service providers (YouTube, for example) store their video assets in the cloud and deliver streams to their consumers across the cloud. As the number of users and the amount of media content constantly being produced grow, traditional cloud-based storage has two drawbacks. First, many servers and storage devices are needed, and they can easily become the performance bottleneck of the whole system. Second, to provide differentiated classes of service at large scale, the system tends to need many additional devices.
This article proposes a robust, scalable, highly available, service-level-provisioning cloud-based storage system designed specifically for distributing multimedia content. The proposed system contains a proven Adaptive Quality of Service (AQoS) algorithm that provides differentiated service levels, and it can be used flexibly in large-, medium- and small-scale environments. In addition, several algorithms are developed to increase overall system performance and fault tolerance. Implementation and experimental results show that the proposed system meets the requirements both in the laboratory and in a practical commercial service environment.
Cloud computing is a fast-growing, emerging technology that provides elasticity, scalability, ubiquitous availability, and cost-effectiveness. There have been numerous studies on the definition and categories of cloud computing; in fact, the "cloud" is often used as a metaphor for the Internet, so that "cloud-based" means network-centric. More and more topics from prior research fields are being studied in combination with the cloud concept. The multimedia cloud (or media cloud) aims to leverage cloud computing technologies for multimedia applications, services and systems.
Researchers have proposed various kinds of media or multimedia cloud from different orientations. In the references, the multimedia cloud is proposed as an emerging computing paradigm that can effectively process multimedia applications and provide novel multimedia services for consumers. Moreover, multimedia-related traffic has been predicted to account for around 90% of global Internet Protocol (IP) traffic, which was expected to reach 1.3 ZB per year by 2016. Therefore, an important research issue is how to deliver such large amounts of multimedia content that is stored in, and crosses over, the cloud.
However, one key challenge is transferring multimedia on the cloud effectively while providing quality of service (QoS). In particular, QoS provision needs to be considered in the cloud-based storage system that is responsible for storing and fetching data for other applications and services in the cloud computing system. Regarding the delivery of multimedia from/to the cloud, the most challenging task is how the cloud storage can provide distributed parallel access to media assets for millions of users with different service levels. Therefore, this paper proposes a QoS-provisioning cloud storage system, particularly aimed at distributed parallel access to media assets for millions of users with different service levels.
1.3 LITERATURE SURVEY
AUTHOR AND PUBLICATION: C.-H. R. Lin, H.-J. Liao, K.-Y. Tung, Y.-C. Lin, and S.-L. Wu, “NETWORK TRAFFIC ANALYSIS WITH CLOUD PLATFORM,” J. Internet Technol., vol. 13, no. 6, pp. 953–961, Dec. 2012.
Existing Internet traffic passive measurement solutions are mainly based on a technical route of downloading a traffic dataset and an analysis tool from the corresponding distribution site, and then carrying out local, off-line traffic analysis. However, the issues of massive traffic dataset acquisition and analysis, as well as measurement-result reuse and sharing, still need further consideration. The paper applies the fast-rising cloud computing paradigm to the network traffic passive measurement field in order to address these issues. After analyzing the drawbacks of the conventional technical route, we propose a novel cloud pattern of passive measurement work and design an architecture for a cloud-pattern-based network traffic analysis platform. Furthermore, using authentic traffic collected at a CERNET backbone (10 Gbps), we implemented a prototype system of the architecture, called the IP Trace Analysis System, or IPTAS for short. Combined with IPTAS, the paper elaborates the critical implementations of the architecture and verifies its feasibility and flexibility through IPTAS application instances.
AUTHOR AND PUBLICATION: K.-H. Kim, S.-Ju Lee, and P. Congdon, “ON CLOUD-CENTRIC NETWORK ARCHITECTURE FOR MULTI-DIMENSIONAL MOBILITY,” ACM SIGCOMM Comput. Commun. Rev., vol. 42, no. 4, pp. 1–6, Oct. 2012.
Despite the wide deployment of wireless networks, maintaining seamless mobile connectivity within a set of local devices and to the remote cloud is still challenging. The crux of this challenge stems from the simultaneous interplay of multiple dimensions of a user's mobility: users frequently move between multiple access networks, mobile devices and unique personas. We identify new trends and challenges in providing rich mobile connectivity to mobile users. We then propose a novel Cloud-centric Architecture for Rich Mobile Experience Networking, called Carmen. Carmen is a distributed system that manages the mobile connectivity of a set of devices belonging to a particular individual, which we call the mobile personal grid (MPG). Carmen enables the MPG to efficiently collect context from a mobile user and coordinate key system resources across the MPG and cloud. We present the new design principles and functional components of Carmen. In addition, we show a system prototype of Carmen's resource monitoring infrastructure to demonstrate its feasibility and its benefits in improving the mobile user's experience.
AUTHOR AND PUBLICATION: C.-F. Lai, H. Wang, H.-C. Chao and G. Nan, “A NETWORK AND DEVICE AWARE QOS APPROACH FOR CLOUD-BASED MOBILE STREAMING,” IEEE Trans. Multimedia, vol. 15, no. 4, pp. 747–757, Jun. 2013.
Cloud multimedia services provide an efficient, flexible, and scalable data processing method and offer a solution to user demands for high-quality and diversified multimedia. As intelligent mobile phones and wireless networks become more and more popular, network services for users are no longer limited to the home. Multimedia information can be obtained easily using mobile devices, allowing users to enjoy ubiquitous network services. Considering the limited bandwidth available for mobile streaming and the differing device requirements, this study presents a network- and device-aware Quality of Service (QoS) approach that provides multimedia data suitable for a terminal unit's environment via interactive mobile streaming services. It further considers the overall network environment and adjusts the interactive transmission frequency and the dynamic multimedia transcoding to avoid wasting bandwidth and terminal power. Finally, this study realizes a prototype of this architecture to validate the feasibility of the proposed method. According to the experiments, this method can provide efficient, self-adaptive multimedia streaming services in varying bandwidth environments.
AUTHOR AND PUBLICATION: J. Jiang, Y. Wu, X. Huang, G. Yang, and W. Zheng, “ONLINE VIDEO PLAYING ON SMARTPHONES: A CONTEXT-AWARE APPROACH BASED ON CLOUD COMPUTING,” J. Internet Technol., vol. 11, no. 6, pp. 821–828, Nov. 2010.
Recently, Cloud-based Mobile Augmentation (CMA) approaches have gained remarkable ground in academia and industry. CMA is the state-of-the-art mobile augmentation model that employs resource-rich clouds to increase, enhance, and optimize the computing capabilities of mobile devices, aiming at the execution of resource-intensive mobile applications. Augmented mobile devices are envisioned to perform extensive computations and to store big data beyond their intrinsic capabilities with the least footprint and vulnerability. Researchers utilize varied cloud-based computing resources (e.g., distant clouds and nearby mobile nodes) to meet the various computing requirements of mobile users. However, employing cloud-based computing resources is not a straightforward panacea. Comprehending the critical factors (e.g., the current state of the mobile client and remote resources) that impact the augmentation process, and the optimum selection of cloud-based resource types, are among the challenges that hinder CMA adoption. This paper comprehensively surveys the mobile augmentation domain and presents a taxonomy of CMA approaches. The objective of this study is to highlight the effects of remote resources on the quality and reliability of augmentation processes and to discuss the challenges and opportunities of employing varied cloud-based resources in augmenting mobile devices. We present the augmentation definition, motivation, and a taxonomy of augmentation types, including traditional and cloud-based. We critically analyze the state-of-the-art CMA approaches and classify them into four groups (distant fixed, proximate fixed, proximate mobile, and hybrid) to present a taxonomy. Vital decision-making and performance-limitation factors that influence the adoption of CMA approaches are introduced, and an exemplary decision-making flowchart for future CMA approaches is presented. The impacts of CMA approaches on mobile computing are discussed, and open challenges are presented as future research directions.
2.0 SYSTEM ANALYSIS
2.1 EXISTING SYSTEM:
Previous work on storage resource management can be broadly classified into two classes. The first guarantees each client's storage QoS requirements as set by a system administrator. Such systems meet required response-time objectives by regulating the rate at which other clients' workloads enter the storage system, for example using Earliest Deadline First (EDF) scheduling; however, this becomes impossible when an unexpected workload burst from other clients occurs.
Chameleon uses a leaky bucket with feedback control, but a leaky-bucket system does not use the storage system efficiently because it is not work-conserving. Triage adopts control theory to predict system performance and correspondingly adjusts its system model for performance isolation and differentiation; its model, however, is not sensitive to the performance dynamics perceived by concurrent clients due to different physical data positions.
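The leaky-bucket regulation mentioned above can be illustrated with a minimal sketch. The rates and class below are illustrative assumptions, not the cited Chameleon implementation; note how the bucket drains at a fixed rate even when the storage system is idle, which is the non-work-conserving behavior criticized above.

```python
class LeakyBucket:
    """Minimal leaky-bucket regulator: queued work drains at a fixed rate."""

    def __init__(self, rate, capacity):
        self.rate = rate          # units of work leaked per second
        self.capacity = capacity  # maximum queued work
        self.level = 0.0
        self.last = 0.0

    def allow(self, now):
        # Drain the bucket for the elapsed time, then try to admit one request.
        self.level = max(0.0, self.level - (now - self.last) * self.rate)
        self.last = now
        if self.level + 1 <= self.capacity:
            self.level += 1
            return True
        return False
```

A request burst beyond `capacity` is rejected regardless of whether the storage system actually has spare throughput, which is why such regulation can under-use the storage.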
- Another consideration is that not only the components but also the third-party packages used in the system must be scalable. More powerful third-party packages could replace parts of the current system but may not scale well in a large-scale situation.
- First, many servers and storage devices are needed, and they can easily become the performance bottleneck of the whole system.
- Second, to provide differentiated classes of service in a large-scale situation, the system tends to need many additional devices.
2.2 PROPOSED SYSTEM:
The proposed system contains a proven Adaptive Quality of Service (AQoS) algorithm that provides differentiated service levels. The system can also be used flexibly in large-, medium- and small-scale environments. In addition, several algorithms are developed to increase overall system performance and fault tolerance. Implementation and experimental results show that the proposed system meets the requirements both in the laboratory and in a practical commercial service environment.
We design and deploy a storage system with QoS provision for a multiple-class-aware multimedia delivery service. The design goals and requirements are that the system can be deployed in the cloud, can be used flexibly in large, medium and small environments, and offers scalability, considerable fault tolerance, security and other basic features.
Our proposed system applies the algorithm to the targeted storage decided in the previous step, using that storage's own statistical data. The constants used in the algorithm are set in a configuration file and can be changed at runtime. Storage I/O benchmark tools (iozone) are used to run a read/write pattern close to that of our services in order to decide the constant values.
Since storage I/O throughput is very dynamic, the system only needs rough values. If a new storage device is the same as the one used now, its constant values can be adopted from the current values. If size is the only difference between the new and current storage, some constants must be modified to fit the new storage size.
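The constant-adoption rule above can be sketched as follows. The constant names and the choice of which constants scale with size are hypothetical illustrations, not the paper's actual configuration keys: rate-type constants carry over unchanged, while size-dependent constants scale by the size ratio.

```python
def adapt_constants(current, current_size_gb, new_size_gb):
    """Derive algorithm constants for a new storage node from the current one.

    Assumes, as in the text, that the new storage differs from the current
    one only in size; otherwise the current values are adopted directly.
    """
    ratio = new_size_gb / current_size_gb
    return {
        # Same media type, so the benchmarked rate estimate carries over.
        "io_rate_estimate": current["io_rate_estimate"],
        # Size-dependent constants scale with the capacity ratio.
        "capacity_threshold": current["capacity_threshold"] * ratio,
        "reserve_blocks": int(current["reserve_blocks"] * ratio),
    }
```

Because the values only need to be rough, a simple linear scaling like this avoids rerunning the iozone benchmark for every similar device.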
Multiclass service awareness is an important requirement of a cloud system for multimedia applications. The users in the system are divided into several service classes. The basic requirement is two levels: high class and low class. High-class users can always use the service, while low-class users need a mechanism to determine whether the service can be used.
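A minimal sketch of such a two-level admission check is shown below. The load metric and the cutoff value are illustrative assumptions, not the paper's exact AQoS mechanism:

```python
def admit(user_class, current_load, low_class_cutoff=0.8):
    """Two-level admission check.

    High-class users are always served; low-class users are admitted
    only while system load stays below a configured cutoff (an assumed
    mechanism standing in for the paper's AQoS decision).
    """
    if user_class == "high":
        return True
    return current_load < low_class_cutoff
```

In practice the cutoff would be one of the runtime-changeable constants described earlier, so operators can tune how aggressively low-class traffic is shed.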
Scalability is another goal: the system must be able to scale up and down in environments of different sizes. Developing a system architecture that works in both small and large settings simultaneously is a challenge. The design idea is that each component of the system is loosely coupled and can be scaled up or down by itself.
Fault tolerance requires a distributed design in which the system is not affected when some servers malfunction. Because a single server failure does not affect the whole system, a server can be removed easily; the proposed system also tries to provide easy server installation. In the system, the different kinds of metadata are held redundantly by the corresponding technologies.
Security is a serious issue in cloud-based services today. Especially in a system that provides differentiated services, security vulnerabilities can easily lead to corruption of the entire system. In the system proposed in this paper, hash algorithms are used to provide verification in the communication protocol and to reject malicious requests. In addition, the system has separate development and production environments for security reasons, since error messages of different detail levels are needed in the different environments.
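The hash-based verification described above is commonly done with a keyed hash (HMAC); the sketch below illustrates that general pattern, with a hypothetical shared key and payload rather than the paper's actual protocol fields:

```python
import hashlib
import hmac

SECRET = b"shared-secret"  # hypothetical pre-shared key between client and storage

def sign(payload: bytes) -> str:
    # Attach a keyed hash so the receiver can verify integrity and origin.
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, digest: str) -> bool:
    # Constant-time comparison; tampered or forged requests are rejected.
    return hmac.compare_digest(sign(payload), digest)
```

A request whose payload or digest has been altered in transit fails `verify` and can be dropped before it reaches the storage back end.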
2.3 HARDWARE & SOFTWARE REQUIREMENTS:
2.3.1 HARDWARE REQUIREMENT:
- Processor – Pentium IV
- Speed –
- RAM – 256 MB (min)
- Hard Disk – 20 GB
- Floppy Drive – 1.44 MB
- Key Board – Standard Windows Keyboard
- Mouse – Two or Three Button Mouse
- Monitor – SVGA
2.3.2 SOFTWARE REQUIREMENTS:
- Operating System : Windows XP or Win7
- Front End : Microsoft Visual Studio 2008
- Back End : MSSQL Server
- Server : ASP .NET Web Server
- Script : C# Script
- Document : MS-Office 2007
3.0 SYSTEM DESIGN:
Data Flow Diagram / Use Case Diagram / Flow Diagram:
- The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.
- The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components. These components are the system process, the data used by the process, an external entity that interacts with the system and the information flows in the system.
- DFD shows how the information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
- A DFD may be used to represent a system at any level of abstraction, and may be partitioned into levels that represent increasing information flow and functional detail.
DFD SYMBOLS:
- Source or destination of data: external sources or destinations, which may be people, organizations or other entities.
- Data store: where the data referenced by a process is stored and retrieved.
- Process: people, procedures or devices that produce data; the physical component is not identified.
- Data flow: data moves in a specific direction from an origin to a destination. The data flow is a "packet" of data.
There are several common modeling rules when creating DFDs:
- All processes must have at least one data flow in and one data flow out.
- All processes should modify the incoming data, producing new forms of outgoing data.
- Each data store must be involved with at least one data flow.
- Each external entity must be involved with at least one data flow.
- A data flow must be attached to at least one process.
3.1 ARCHITECTURE DIAGRAM
3.2 DATAFLOW DIAGRAM
3.3 USE CASE DIAGRAM: