
A SCALABLE AND RELIABLE MATCHING SERVICE FOR CONTENT-BASED PUBLISH/SUBSCRIBE SYSTEMS

ABSTRACT:

Characterized by the increasing arrival rate of live content, emergency applications pose a great challenge: how to disseminate large-scale live content to interested users in a scalable and reliable manner. The publish/subscribe (pub/sub) model is widely used for data dissemination because of its ability to expand the system seamlessly to a massive scale. However, the event matching services of most existing pub/sub systems either suffer from low matching throughput when matching a large number of skewed subscriptions, or interrupt dissemination when a large number of servers fail. Cloud computing offers great opportunities to meet these requirements of complex computing and reliable communication.

In this paper, we propose SREM, a scalable and reliable event matching service for content-based pub/sub systems in a cloud computing environment. To achieve low routing latency and reliable links among servers, we propose a distributed overlay, SkipCloud, to organize the servers of SREM. Through a hybrid space partitioning technique, HPartition, large-scale skewed subscriptions are mapped into multiple subspaces, which ensures high matching throughput and provides multiple candidate servers for each event.

Moreover, a series of dynamics maintenance mechanisms is studied extensively. To evaluate the performance of SREM, 64 servers are deployed and millions of live content items are tested on a CloudStack testbed. Under various parameter settings, the experimental results demonstrate that the traffic overhead of routing events in SkipCloud is at least 60 percent smaller than in the Chord overlay, and that the matching rate of SREM is at least 3.7 times and at most 40.4 times larger than that of the single-dimensional partitioning technique of BlueDove. Besides, SREM brings the event loss rate back to 0 within tens of seconds even if a large number of servers fail simultaneously.

INTRODUCTION

Because of its importance in helping users make real-time decisions, data dissemination has become dramatically significant in many large-scale emergency applications, such as earthquake monitoring, severe weather warning, and status updates in social networks. Recently, data dissemination in these emergency applications has shown a number of new trends. One is the rapid growth of live content. For instance, Facebook users publish over 600,000 pieces of content and Twitter users send over 100,000 tweets on average per minute. The other is the highly dynamic network environment. For instance, measurement studies indicate that most users' sessions in social networks last only several minutes. In emergency scenarios, sudden disasters such as earthquakes or severe weather may cause a large number of users to fail instantaneously.

These characteristics require the data dissemination system to be scalable and reliable. Firstly, the system must be scalable to support the large amount of live content. The key is to offer a scalable event matching service to filter out irrelevant users; otherwise, the content may have to traverse a large number of uninterested users before it reaches interested users. Secondly, given the dynamic network environment, it is necessary to provide reliable schemes that keep the dissemination capacity continuous; otherwise, a system interruption may turn live content into obsolete content. Driven by these requirements, the publish/subscribe (pub/sub) pattern is widely used to disseminate data due to its flexibility, scalability, and efficient support of complex event processing. In pub/sub systems (pub/subs), a receiver (subscriber) registers its interest in the form of a subscription, and events are published by senders to the pub/sub system. The system matches events against subscriptions and disseminates them to interested subscribers.

In traditional data dissemination applications, live content is generated by publishers at a low rate, which leads many pub/subs to adopt multi-hop routing techniques to disseminate events. A large body of broker-based pub/subs forward events and subscriptions by organizing nodes into various distributed overlays, such as tree-based, cluster-based, and DHT-based designs. However, the multi-hop routing techniques in these broker-based systems lead to low matching throughput, which is inadequate for the current high arrival rate of live content.

Recently, cloud computing has provided great opportunities for applications that require complex computing and high-speed communication: the servers are connected by high-speed networks and have powerful computing and storage capacities. A number of pub/sub services based on the cloud computing environment have been proposed, such as Move, BlueDove, and SEMAS. However, most of them cannot completely meet the requirements of both scalability and reliability when matching large-scale live content under highly dynamic environments.

This mainly stems from the following facts:

1) Most of them are ill-suited to matching live content with high data dimensionality because of the limitations of their subscription space partitioning techniques, which bring either low matching throughput or high memory overhead.

2) These systems adopt the one-hop lookup technique among servers to reduce routing latency. In spite of its high efficiency, it requires each dispatching server to have the same view of the matching servers; otherwise, subscriptions or events may be assigned to the wrong matching server, which creates an availability problem when matching servers join or crash. A number of schemes can be used to keep the view consistent, such as periodically sending heartbeat messages to the dispatching servers or exchanging messages among the matching servers. However, these extra schemes may introduce large traffic overhead or interrupt the event matching service.
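As an illustration of this consistency issue, the following minimal Java sketch (all class, field, and method names are hypothetical and not taken from SREM or any of the systems discussed) shows a dispatching server that builds its view of live matching servers from periodic heartbeats. Until a crashed server's heartbeat expires, events hashed onto it are misrouted, while shortening the timeout increases the heartbeat traffic.

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a dispatching server tracks matching servers through
// periodic heartbeats and performs a one-hop lookup over the servers it
// currently believes are alive.
public class DispatcherView {
    private static final long TIMEOUT_MS = 3000L; // assumed heartbeat timeout
    private final Map<String, Long> lastHeartbeat = new ConcurrentHashMap<String, Long>();

    // Called whenever a heartbeat message arrives from a matching server.
    public void onHeartbeat(String serverId) {
        lastHeartbeat.put(serverId, System.currentTimeMillis());
    }

    // One-hop lookup: hash the key onto the servers currently believed alive.
    public String pickMatchingServer(String key) {
        long now = System.currentTimeMillis();
        Iterator<Map.Entry<String, Long>> it = lastHeartbeat.entrySet().iterator();
        while (it.hasNext()) {
            if (now - it.next().getValue().longValue() > TIMEOUT_MS) {
                it.remove(); // forget servers whose heartbeats have expired
            }
        }
        Object[] alive = lastHeartbeat.keySet().toArray();
        if (alive.length == 0) {
            throw new IllegalStateException("no matching server available");
        }
        // A stale view (e.g., a crashed server not yet expired) misroutes the key.
        return (String) alive[Math.abs(key.hashCode() % alive.length)];
    }
}
```

Each dispatching server maintains such a view independently, so two dispatchers with different views may route the same key to different matching servers, which is exactly the availability problem noted above.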

LITERATURE SURVEY

RELIABLE AND HIGHLY AVAILABLE DISTRIBUTED PUBLISH/SUBSCRIBE SERVICE

PUBLICATION: Proc. 28th IEEE Int. Symp. Reliable Distrib. Syst., 2009, pp. 41–50.

AUTHORS: R. S. Kazemzadeh and H.-A. Jacobsen

EXPLANATION:

This paper develops reliable distributed publish/subscribe algorithms that keep the service available in the face of concurrent crash failures of up to δ brokers. Reliability of service in this context refers to per-source, in-order, and exactly-once delivery of publications to matching subscribers. To handle failures, brokers maintain data structures that enable them to reconnect the topology and compute new forwarding paths on the fly. This enables fast reaction to failures and improves the system's availability. Moreover, a recovery procedure is presented that recovering brokers execute in order to re-enter the system and synchronize their routing information.

BUILDING A RELIABLE AND HIGH-PERFORMANCE CONTENT-BASED PUBLISH/SUBSCRIBE SYSTEM

PUBLICATION: J. Parallel Distrib. Comput., vol. 73, no. 4, pp. 371–382, 2013.

AUTHORS: Y. Zhao and J. Wu

EXPLANATION:

Provisioning reliability in a high-performance content-based publish/subscribe system is a challenging problem. The inherent complexity of content-based routing makes message loss detection and recovery, and network state recovery, extremely complicated. Existing proposals either try to reduce the complexity of handling failures in a traditional network architecture, which only partially addresses the problem, or rely on robust network architectures that can gracefully tolerate failures but perform less efficiently than the traditional architectures. In this paper, we present a hybrid network architecture for reliable and high-performance content-based publish/subscribe. Two overlay networks, a high-performance one with moderate fault tolerance and a highly robust one with sufficient performance, work together to guarantee the performance of normal operations and reliability in the presence of failures. Our design exploits the fact that, in a high-performance content-based publish/subscribe system, subscriptions are broadcast to all brokers; this facilitates efficient backup routing when failures occur and incurs minimal overhead. Per-hop reliability is used to gracefully detect and recover lost messages that are caused by transit errors. Two backup routing methods based on DHT routing are proposed. Extensive simulation experiments are conducted. The results demonstrate the superior performance of our system compared to other state-of-the-art proposals.

SCALABLE AND ELASTIC EVENT MATCHING FOR ATTRIBUTE-BASED PUBLISH/SUBSCRIBE SYSTEMS

PUBLICATION: Future Gener. Comput. Syst., vol. 36, pp. 102–119, 2013.

AUTHORS: X. Ma, Y. Wang, Q. Qiu, W. Sun, and X. Pei

EXPLANATION:

Due to the sudden changes in the arrival rate of live content and the skewness of large-scale subscriptions, the rapid growth of emergency applications presents a new challenge to current publish/subscribe systems: providing a scalable and elastic event matching service. However, most existing event matching services cannot adapt to sudden changes in the arrival rate of live content, and they generate a non-uniform distribution of load on the servers because of the skewness of the large-scale subscriptions. To this end, we propose SEMAS, a scalable and elastic event matching service for attribute-based pub/sub systems in the cloud computing environment. SEMAS uses a one-hop lookup overlay to reduce routing latency. Through a hierarchical multi-attribute space partition technique, SEMAS adaptively partitions the skewed subscriptions and maps them into balanced clusters to achieve high matching throughput. The performance-aware detection scheme in SEMAS adaptively adjusts the scale of servers according to the churn of workloads, leading to a high performance–price ratio. A prototype system on an OpenStack-based platform demonstrates that SEMAS has a linearly increasing matching capacity as the number of servers and the partitioning granularity increase. It is able to elastically adjust the scale of servers and tolerate a large number of server failures with low latency and traffic overhead. Compared with existing cloud-based pub/sub systems, SEMAS achieves higher throughput under various workloads.

SYSTEM ANALYSIS

EXISTING SYSTEM:

Characterized by the increasing arrival rate of live content, emergency applications pose a great challenge: how to disseminate large-scale live content to interested users in a scalable and reliable manner. The publish/subscribe (pub/sub) model is widely used for data dissemination because of its ability to expand the system seamlessly to a massive scale. However, the event matching services of most existing pub/sub systems either suffer from low matching throughput when matching a large number of skewed subscriptions, or interrupt dissemination when a large number of servers fail.

However, most existing event matching services cannot adapt to sudden changes in the arrival rate of live content, and they generate a non-uniform distribution of load on the servers because of the skewness of the large-scale subscriptions. To this end, SEMAS, a scalable and elastic event matching service for attribute-based pub/sub systems in the cloud computing environment, was proposed. SEMAS uses a one-hop lookup overlay to reduce routing latency. Through a hierarchical multi-attribute space partition technique, SEMAS adaptively partitions the skewed subscriptions and maps them into balanced clusters to achieve high matching throughput.

The performance-aware detection scheme in SEMAS adaptively adjusts the scale of servers according to the churn of workloads, leading to a high performance–price ratio. A prototype system on an OpenStack-based platform demonstrates that SEMAS has a linearly increasing matching capacity as the number of servers and the partitioning granularity increase. It is able to elastically adjust the scale of servers and tolerate a large number of server failures with low latency and traffic overhead.

DISADVANTAGES:

Publish/subscribe (pub/sub) is a commonly used asynchronous communication pattern among application components. Senders and receivers of messages are decoupled from each other and interact with an intermediary, the pub/sub system.

A receiver registers its interest in certain kinds of messages with the pub/sub system in the form of a subscription. Messages are published by senders to the pub/sub system. The system matches messages (i.e., publications) to subscriptions and delivers messages to interested subscribers using a notification mechanism.

There are several ways for subscriptions to specify the messages of interest. In the simplest form, messages are associated with topic strings and subscriptions are defined as patterns on the topic string. A more expressive form is attribute-based pub/sub, where messages are further annotated with various attributes.

Subscriptions are expressed as predicates on the message topic and attributes. An even more general form is content-based pub/sub, where subscriptions can be arbitrary Boolean functions on the entire content of messages (e.g., XML documents) rather than being limited to attributes.

Attribute-based pub/sub strikes a balance between the simplicity and performance of topic-based pub/sub and the expressiveness of content-based pub/sub. Many large-scale and loosely coupled applications, including stock quote distribution, network management, and environmental monitoring, can be structured around a pub/sub messaging paradigm.
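To make the attribute-based model concrete, here is a minimal Java sketch (the class names, the Range and Subscription types, and the earthquake attributes are illustrative assumptions, not part of any cited system): a subscription keeps a numeric range per attribute, and an event, represented as attribute-value pairs, is delivered to a subscriber only if every constrained attribute falls inside its range.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of attribute-based matching. A subscription holds a
// numeric range per attribute; an event is a set of attribute values.
public class AttributeMatcher {

    static class Range {
        final double low, high;
        Range(double low, double high) { this.low = low; this.high = high; }
        boolean contains(double v) { return v >= low && v <= high; }
    }

    static class Subscription {
        final String subscriberId;
        final Map<String, Range> constraints = new HashMap<String, Range>();
        Subscription(String subscriberId) { this.subscriberId = subscriberId; }
    }

    // An event matches when every constrained attribute is present and in range.
    static boolean matches(Subscription sub, Map<String, Double> event) {
        for (Map.Entry<String, Range> c : sub.constraints.entrySet()) {
            Double value = event.get(c.getKey());
            if (value == null || !c.getValue().contains(value.doubleValue())) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Subscription sub = new Subscription("alice");
        sub.constraints.put("magnitude", new Range(5.0, 10.0)); // illustrative earthquake alert
        sub.constraints.put("depth", new Range(0.0, 70.0));

        Map<String, Double> event = new HashMap<String, Double>();
        event.put("magnitude", 6.1);
        event.put("depth", 35.0);

        System.out.println(matches(sub, event)); // true: notify subscriber "alice"
    }
}
```

Scanning every subscription this way becomes the bottleneck as subscriptions grow to millions, which is what motivates the subscription space partitioning discussed in the proposed system.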

PROPOSED SYSTEM:

We propose a scalable and reliable matching service for content-based pub/sub systems in cloud computing environments, called SREM. Specifically, we mainly focus on two problems: one is how to organize servers in the cloud computing environment to achieve scalable and reliable routing; the other is how to manage subscriptions and events to achieve parallel matching among these servers. Generally speaking, we provide the following contributions:

  • We propose a distributed overlay protocol, called SkipCloud, to organize servers in the cloud computing environment. SkipCloud enables subscriptions and events to be forwarded among brokers in a scalable and reliable manner. It is also easy to implement and maintain.

  • To achieve scalable and reliable event matching among multiple servers, we propose a hybrid multidimensional space partitioning technique, called HPartition. It gathers similar subscriptions into the same server and provides multiple candidate matching servers for each event. Moreover, it adaptively alleviates hot spots and keeps the workload balanced among all servers.
  • We conduct extensive experiments on a CloudStack testbed to verify the performance of SREM under various parameter settings.
  • In order to take advantage of multiple distributed brokers, SREM divides the entire content space among the top clusters of SkipCloud, so that each top cluster only handles a subset of the entire space and searches a small number of candidate subscriptions. SREM employs the hybrid multidimensional space partitioning technique HPartition to achieve scalable and reliable event matching; a simplified sketch of this partitioning idea is given after this list.
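The sketch below illustrates the partitioning idea only, assuming a plain grid over attribute values normalized to [0, 1]; the actual HPartition scheme in SREM is a hybrid of single-dimensional and all-dimensional partitions and alleviates hot spots adaptively, so this code is an approximation of the concept and all names are hypothetical.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of grid-style subscription space partitioning.
public class GridPartition {
    static final int SEGMENTS = 4; // each attribute axis split into 4 segments over [0, 1]

    // Subspace id -> ids of the subscriptions stored in that subspace.
    final Map<String, List<String>> subspaces = new HashMap<String, List<String>>();

    // A subscription is a rectangle [low, high] per attribute; it is stored in
    // every grid cell it overlaps, so similar subscriptions end up together.
    void addSubscription(String subId, double[] low, double[] high) {
        for (String cell : overlappingCells(low, high, 0, "")) {
            List<String> bucket = subspaces.get(cell);
            if (bucket == null) {
                bucket = new ArrayList<String>();
                subspaces.put(cell, bucket);
            }
            bucket.add(subId);
        }
    }

    // An event is a point; only the single cell containing it has to be searched.
    String cellOfEvent(double[] point) {
        StringBuilder id = new StringBuilder();
        for (double v : point) {
            id.append((int) Math.min(SEGMENTS - 1, v * SEGMENTS)).append('.');
        }
        return id.toString();
    }

    // Recursively enumerate the grid cells a subscription rectangle overlaps.
    private List<String> overlappingCells(double[] low, double[] high, int dim, String prefix) {
        List<String> cells = new ArrayList<String>();
        if (dim == low.length) {
            cells.add(prefix);
            return cells;
        }
        int first = (int) Math.min(SEGMENTS - 1, low[dim] * SEGMENTS);
        int last = (int) Math.min(SEGMENTS - 1, high[dim] * SEGMENTS);
        for (int s = first; s <= last; s++) {
            cells.addAll(overlappingCells(low, high, dim + 1, prefix + s + "."));
        }
        return cells;
    }
}
```

Because an event point falls into exactly one cell, only the subscriptions stored in that cell need to be checked, and when the cell is replicated on several brokers of a top cluster, any of them can serve as a candidate matcher; this is where the parallel matching and the multiple candidate servers come from.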

ADVANTAGES:

To achieve reliable connectivity and low routing latency, these brokers are connected through a distributed overlay, called SkipCloud. The entire content space is partitioned into disjoint subspaces, each of which is managed by a number of brokers. Subscriptions and events are dispatched through SkipCloud to the subspaces that overlap with them.

Since the pub/sub system needs to find all of the matched subscribers, each event has to be matched in every datacenter, which leads to large traffic overhead as the number of datacenters and the arrival rate of live content increase.

Besides, it is hard to achieve workload balance among the servers of all datacenters due to the various skewed distributions of users' interests. Another question is why we need a distributed overlay like SkipCloud to ensure reliable logical connectivity in a datacenter environment, where servers are more stable than the peers in P2P networks.

This is because, as the number of servers in datacenters increases, node failure becomes the norm rather than a rare exception. Node failures may lead to unreliable and inefficient routing among servers. To this end, we organize servers into SkipCloud to reduce routing latency in a scalable and reliable manner.
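As a rough illustration of routing over such an overlay, the following minimal Java sketch assumes each broker carries a binary cluster identifier and greedily forwards a message to the known peer whose identifier shares the longest prefix with the destination cluster. The actual SkipCloud protocol maintains multiple levels of clusters and neighbor lists for reliability, so this is only a simplified approximation and all names are hypothetical.

```java
import java.util.List;

// Hypothetical sketch of greedy prefix routing between broker clusters.
public class PrefixRouter {

    // Length of the common prefix of two binary cluster identifiers.
    static int commonPrefix(String a, String b) {
        int i = 0;
        while (i < a.length() && i < b.length() && a.charAt(i) == b.charAt(i)) {
            i++;
        }
        return i;
    }

    // Forward to the known peer whose cluster id best matches the destination.
    static String nextHop(String destClusterId, List<String> knownPeerClusterIds) {
        String best = null;
        int bestLen = -1;
        for (String peer : knownPeerClusterIds) {
            int len = commonPrefix(destClusterId, peer);
            if (len > bestLen) {
                bestLen = len;
                best = peer;
            }
        }
        // With suitable neighbor tables, each hop extends the matched prefix, so a
        // message reaches the destination cluster in a small number of hops.
        return best;
    }
}
```

Since each subspace (and hence each cluster identifier) is managed by several brokers, routing toward a cluster can succeed as long as at least one of its brokers is alive, which is how the overlay keeps dissemination going when individual servers crash.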

HARDWARE & SOFTWARE REQUIREMENTS:

HARDWARE REQUIREMENT:

  • Processor      –    Pentium IV
  • Speed          –    1 GHz
  • RAM            –    256 MB (min)
  • Hard Disk      –    20 GB
  • Floppy Drive   –    1.44 MB
  • Keyboard       –    Standard Windows Keyboard
  • Mouse          –    Two or Three Button Mouse
  • Monitor        –    SVGA

SOFTWARE REQUIREMENTS:

  • Operating System  :    Windows XP or Windows 7
  • Front End         :    Java JDK 1.7
  • Back End          :    MySQL Server
  • Server            :    Apache Tomcat Server
  • Script            :    JSP
  • Documentation     :    MS-Office 2007