
IEEE final year student projects by coreieeeprojects.com


DESCRIPTION

Core Projects is a division of Conceptz Corporate Training focused on project delivery. We are a diversified team working towards a common goal: providing the best project delivery for final-year BE, M.Tech, BCA, BSc, MCA, and MSc students, as well as for our industry clients. We work closely with our clients to understand their business process and give them the best possible delivery model to minimize cost and maximize ROI.



Core IEEE Projects (Division of Conceptz)
#108, 5th Main, 4th Cross, Hanumanth Nagar, Basavanagudi, Bangalore-50
Website: www.coreieeeprojects.com, Contact: 9535052050

IEEE 2011 Projects List / 2011 IEEE Projects

Title: A Policy Enforcing Mechanism for Trusted Ad Hoc Networks

To ensure fair and secure communication in Mobile Ad hoc Networks (MANETs), the applications running in these networks must be regulated by proper communication policies. However, enforcing policies in MANETs is challenging because they lack the infrastructure and trusted entities encountered in traditional distributed systems. This paper presents the design and implementation of a policy enforcing mechanism based on a trusted execution monitor built on top of the Trusted Platform Module. Under this mechanism, each application or protocol has an associated policy. Two instances of an application running on different nodes may engage in communication only if these nodes enforce the same set of policies for both the application and the underlying protocols used by the application. In this way, nodes can form trusted application-centric networks. Before allowing a node to join such a network, Satem verifies its trustworthiness of enforcing the required set of policies. If any of them is compromised, Satem disconnects the node from the network. We demonstrate the correctness of our solution through security analysis, and its low overhead through performance evaluation of the applications.

Title: A Query Formulation Language for the Data Web

We present a query formulation language called MashQL in order to easily query and fuse structured data on the web. The main novelty of MashQL is that it allows people with limited IT skills to explore and query one or multiple data sources without prior knowledge about the schema, structure, vocabulary, or any technical details of these sources. More importantly, to be robust and cover most cases in practice, we do not assume that a data source should have an offline or inline schema. This poses several language-design and performance complexities that we fundamentally tackle. To illustrate the query formulation power of MashQL, and without loss of generality, we chose the Data Web scenario. We also chose querying RDF, as it is the most primitive data model; hence, MashQL can be similarly used for querying relational databases and XML. We present two implementations of MashQL: an online mashup editor and a Firefox add-on. The former illustrates how MashQL can be used to query and mash up the Data Web as simply as filtering and piping web feeds; the Firefox add-on illustrates using the browser as a web composer rather than only a navigator. To end, we evaluate MashQL on querying two datasets, DBLP and DBpedia, and show that our indexing techniques allow instant user interaction.


Title: A Privacy-Preserving Remote Data Integrity Checking Protocol with Data Dynamics and Public Verifiability


Title: Adaptive Fault-Tolerant QoS Control Algorithms for Maximizing System Lifetime of Query-Based Wireless Sensor Networks

Data sensing and retrieval in wireless sensor systems have a widespread application in areas such as security and surveillance monitoring, and command and control in battlefields. In query-based wireless sensor systems, a user would issue a query and expect a response to be returned within the deadline. While the use of fault tolerance mechanisms through redundancy improves query reliability in the presence of unreliable wireless communication and sensor faults, it could cause the energy of the system to be quickly depleted. Therefore, there is an inherent tradeoff between query reliability and energy consumption in query-based wireless sensor systems. In this paper, we develop adaptive fault-tolerant quality of service (QoS) control algorithms based on hop-by-hop data delivery utilizing "source" and "path" redundancy, with the goal to satisfy application QoS requirements while prolonging the lifetime of the sensor system. We develop a mathematical model for the lifetime of the sensor system as a function of system parameters including the "source" and "path" redundancy levels utilized. We discover that there exist optimal "source" and "path" redundancy levels under which the lifetime of the system is maximized while satisfying application QoS requirements. Numerical data are presented and validated through extensive simulation, with physical interpretations given, to demonstrate the feasibility of our algorithm design.
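
For illustration, a minimal Python sketch of the underlying idea: search over candidate "source" and "path" redundancy levels for the pair that maximizes an estimated lifetime while meeting a reliability requirement. The reliability and energy formulas, parameter values, and function names below are simplified assumptions for illustration, not the models derived in the paper.

    # Sketch: choose "source" (ms) and "path" (mp) redundancy levels that maximize
    # estimated lifetime while meeting a reliability requirement.
    def query_reliability(ms, mp, p_sensor=0.9, p_path=0.8):
        # Query succeeds if at least one of ms sensor readings arrives over at
        # least one of mp paths (independence assumed).
        p_delivered = 1 - (1 - p_path) ** mp
        return 1 - (1 - p_sensor * p_delivered) ** ms

    def energy_per_query(ms, mp, e_sense=1.0, e_relay=2.0):
        # Energy grows with both redundancy levels (placeholder linear model).
        return ms * e_sense + ms * mp * e_relay

    def best_redundancy(energy_budget=1e5, r_required=0.95, max_level=5):
        best = None
        for ms in range(1, max_level + 1):
            for mp in range(1, max_level + 1):
                if query_reliability(ms, mp) < r_required:
                    continue
                lifetime = energy_budget / energy_per_query(ms, mp)  # queries served
                if best is None or lifetime > best[0]:
                    best = (lifetime, ms, mp)
        return best   # (estimated lifetime, ms, mp)

    print(best_redundancy())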


Title: Adaptive Provisioning of Human Expertise in Service-oriented Systems

Web-based collaborations have become essential in today's business environments. Due to the availability of various SOA frameworks, Web services emerged as the de facto technology to realize flexible compositions of services. While most existing work focuses on the discovery and composition of software-based services, we highlight concepts for a people-centric Web. Knowledge-intensive environments clearly demand provisioning of human expertise along with sharing of computing resources or business data through software-based services. To address these challenges, we introduce an adaptive approach allowing humans to provide their expertise through services using SOA standards, such as WSDL and SOAP. The seamless integration of humans in the SOA loop triggers numerous social implications, such as evolving expertise and drifting interests of human service providers. Here we propose a framework that is based on interaction monitoring techniques enabling adaptations in SOA-based socio-technical systems.

Title: Automated Certification for Compliant Cloud-based Business Processes

A key problem in the deployment of large-scale, reliable cloud computing concerns the difficulty of certifying the compliance of business processes operating in the cloud. Standard audit procedures such as SAS-70 and SAS-117 are hard to conduct for cloud-based processes. The paper proposes a novel approach to certify the compliance of business processes with regulatory requirements. The approach translates process models into their corresponding Petri net representations and checks them against requirements also expressed in this formalism. Being based on Petri nets, the approach provides well-founded evidence on adherence and, in case of noncompliance, indicates the possible vulnerabilities. Keywords: Business process models, Cloud computing, Compliance certification, Audit, Petri nets.


Title: Data Integrity Proofs in Cloud Storage

Cloud computing has been envisioned as the de facto solution to the rising storage costs of IT enterprises. With the high costs of data storage devices as well as the rapid rate at which data is being generated, it proves costly for enterprises or individual users to frequently update their hardware. Apart from reduction in storage costs, data outsourcing to the cloud also helps in reducing maintenance. Cloud storage moves the user's data to large, remotely located data centers over which the user does not have any control. However, this unique feature of the cloud poses many new security challenges which need to be clearly understood and resolved. We provide a scheme which gives a proof of data integrity in the cloud which the customer can employ to check the correctness of his data in the cloud. This proof can be agreed upon by both the cloud and the customer and can be incorporated in the Service Level Agreement (SLA).
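
For illustration, a minimal Python sketch of a generic challenge-response spot check on stored data, of the kind an SLA could reference: the client challenges randomly chosen blocks and the cloud answers with a keyed MAC over them. All names and parameters are illustrative assumptions; this is not the specific proof-of-integrity scheme proposed in the paper.

    # Sketch: challenge-response spot check. The client keeps a secret key and
    # challenges random blocks; the cloud answers with a keyed MAC over them.
    import hmac, hashlib, os, random

    BLOCK = 4096

    def blocks(data):
        return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

    def cloud_response(data, key, challenge):
        # The cloud proves possession of the challenged blocks.
        mac = hmac.new(key, digestmod=hashlib.sha256)
        for idx, nonce in challenge:
            mac.update(nonce + blocks(data)[idx])
        return mac.digest()

    def client_verify(original, stored, key, samples=10):
        # For brevity the expected answer is recomputed from the original file;
        # a real scheme would let the client verify without keeping the file.
        n = len(blocks(original))
        challenge = [(random.randrange(n), os.urandom(16)) for _ in range(samples)]
        expected = cloud_response(original, key, challenge)
        return hmac.compare_digest(expected, cloud_response(stored, key, challenge))

    key = os.urandom(32)
    data = os.urandom(10 * BLOCK)
    print(client_verify(data, data, key))                         # True: intact copy
    print(client_verify(data, b"X" + data[1:], key, samples=30))  # False with high probability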

Title: Data Leakage Detection

A data distributor has given sensitive data to a set of supposedly trusted agents (third parties). Some of the data is leaked and found in an unauthorized place (e.g., on the web or somebody's laptop). The distributor must assess the likelihood that the leaked data came from one or more agents, as opposed to having been independently gathered by other means. We propose data allocation strategies (across the agents) that improve the probability of identifying leakages. These methods do not rely on alterations of the released data (e.g., watermarks). In some cases we can also inject "realistic but fake" data records to further improve our chances of detecting leakage and identifying the guilty party.
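
For illustration, a minimal Python sketch of estimating per-agent "guilt": assume each leaked record was either taken from an agent that holds it or guessed independently with probability p_guess. This mirrors the spirit of the guilt model but is a simplified stand-in, not the paper's exact formula or allocation strategy.

    # Sketch: probability that an agent supplied at least one leaked record it
    # holds, assuming each leaked record could also be guessed independently
    # with probability p_guess.
    def guilt_score(agent_data, leaked, p_guess=0.2):
        overlap = [r for r in leaked if r in agent_data]
        if not overlap:
            return 0.0
        return 1.0 - p_guess ** len(overlap)

    allocations = {
        "agent_A": {"r1", "r2", "r3"},
        "agent_B": {"r3", "r4"},
        "agent_C": {"r5"},
    }
    leaked = {"r2", "r3"}

    for agent, data in allocations.items():
        print(agent, round(guilt_score(data, leaked), 3))   # agent_A scores highest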

Title: Efficient Computation of Range Aggregates Against Uncertain Location Based Queries

In many applications, including location-based services, queries may not be precise. In this paper, we study the problem of efficiently computing range aggregates in a multidimensional space when the query location is uncertain. Specifically, for a query point Q whose location is uncertain and a set S of points in a multidimensional space, we want to calculate the aggregate (e.g., count, average, and sum) over the subset SI of S, where Q has at least probability θ of being within distance γ of each point p in SI. We propose novel, efficient techniques to solve the problem following the filtering-and-verification paradigm. In particular, two novel filtering techniques are proposed to effectively and efficiently remove data points from verification. Our comprehensive experiments based on both real and synthetic data demonstrate the efficiency and scalability of our techniques.
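
For illustration, a minimal Python sketch of the verification step only: a point is counted when the sampled probability that the uncertain query location Q lies within distance gamma of it reaches theta. The uniform-disk uncertainty model and all parameter values are assumptions for illustration; the paper's filtering structures are not shown.

    # Sketch: "verification" only. A point is counted when
    # Pr[dist(Q, point) <= gamma] >= theta, estimated by sampling Q's location.
    import math, random

    def sample_query_location(center, radius):
        # Toy uncertainty model: Q uniform in a disk around a nominal center.
        while True:
            x, y = random.uniform(-radius, radius), random.uniform(-radius, radius)
            if x * x + y * y <= radius * radius:
                return (center[0] + x, center[1] + y)

    def uncertain_range_count(points, center, radius, gamma, theta, n_samples=2000):
        count = 0
        for p in points:
            hits = sum(math.dist(sample_query_location(center, radius), p) <= gamma
                       for _ in range(n_samples))
            if hits / n_samples >= theta:
                count += 1
        return count

    pts = [(1.0, 1.0), (3.0, 3.0), (10.0, 10.0)]
    print(uncertain_range_count(pts, (0.0, 0.0), radius=1.0, gamma=3.0, theta=0.9))  # 1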


Title: Enabling Public Auditability and Data Dynamics for Storage Security in Cloud Computing

Cloud Computing has been envisioned as the next-generation architecture of IT enterprise. It moves the application software and databases to the centralized large data centers, where the management of the data and services may not be fully trustworthy. This unique paradigm brings about many new security challenges, which have not been well understood. This work studies the problem of ensuring the integrity of data storage in Cloud Computing. In particular, we consider the task of allowing a third party auditor (TPA), on behalf of the cloud client, to verify the integrity of the dynamic data stored in the cloud. The introduction of the TPA eliminates the involvement of the client through the auditing of whether his data stored in the cloud is indeed intact. The support for data dynamics via the most general forms of data operation, such as block modification, insertion, and deletion, is also a significant step toward practicality, since services in Cloud Computing are not limited to archive or backup data only. While prior works on ensuring remote data integrity often lack the support of either public auditability or dynamic data operations, this paper achieves both. We first identify the difficulties and potential security problems of direct extensions with fully dynamic data updates from prior works and then show how to construct an elegant verification scheme for the seamless integration of these two salient features in our design. In particular, to achieve efficient data dynamics, we improve the existing proof of storage models by manipulating block tag authentication. To support efficient handling of multiple auditing tasks, we further explore the technique of signature aggregation to extend our main result into a multi-user setting, where the TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis show that the proposed schemes are highly efficient and provably secure.

Title: Exploiting Dynamic Resource Allocation for Efficient Parallel Data Processing in the Cloud

Cloud computing companies have started to integrate frameworks for parallel data processing in their product portfolio, making it easy for customers to access these services and to deploy their programs. However, the processing frameworks which are currently used have been designed for static, homogeneous cluster setups and disregard the particular nature of a cloud. Consequently, the allocated compute resources may be inadequate for big parts of the submitted job and unnecessarily increase processing time and cost. In this paper we discuss the opportunities and challenges for efficient parallel data processing in clouds and present our research project. It is the first data processing framework to explicitly exploit the dynamic resource allocation offered by today's IaaS clouds for both task scheduling and execution. Particular tasks of a processing job can be assigned to different types of virtual machines which are automatically instantiated and terminated during the job execution.


Title: Exploring Application-Level Semantics for Data Compression

Natural phenomena show that many creatures form large social groups and move in regular patterns. However, previous works focus on finding the movement patterns of each single object or all objects. In this paper, we first propose an efficient distributed mining algorithm to jointly identify a group of moving objects and discover their movement patterns in wireless sensor networks. Afterward, we propose a compression algorithm, called 2P2D, which exploits the obtained group movement patterns to reduce the amount of delivered data. The compression algorithm includes a sequence merge phase and an entropy reduction phase. In the sequence merge phase, we propose a Merge algorithm to merge and compress the location data of a group of moving objects. In the entropy reduction phase, we formulate a Hit Item Replacement (HIR) problem and propose a Replace algorithm that obtains the optimal solution. Moreover, we devise three replacement rules and derive the maximum compression ratio. The experimental results show that the proposed compression algorithm leverages the group movement patterns to reduce the amount of delivered data effectively and efficiently.

Title: Improving Aggregate Recommendation Diversity Using Ranking-Based Techniques

Recommender systems are becoming increasingly important to individual users and businesses for providing personalized recommendations. However, while the majority of algorithms proposed in the recommender systems literature have focused on improving recommendation accuracy, other important aspects of recommendation quality, such as the diversity of recommendations, have often been overlooked. In this paper, we introduce and explore a number of item ranking techniques that can generate recommendations that have substantially higher aggregate diversity across all users while maintaining comparable levels of recommendation accuracy. Comprehensive empirical evaluation consistently shows the diversity gains of the proposed techniques using several real-world rating datasets and different rating prediction algorithms.
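
For illustration, a minimal Python sketch of one ranking-based technique in this spirit: among items whose predicted rating clears an accuracy threshold, recommend the least popular ones first, which raises aggregate diversity across users. The data and threshold below are illustrative assumptions.

    # Sketch: among items whose predicted rating clears a threshold, recommend
    # the least-popular first. Predicted ratings and popularity counts are
    # assumed to come from any existing rating-prediction algorithm.
    def rerank_for_diversity(predictions, popularity, threshold=3.5, top_n=3):
        eligible = [i for i, r in predictions.items() if r >= threshold]
        # Least popular first; predicted rating breaks ties.
        eligible.sort(key=lambda i: (popularity.get(i, 0), -predictions[i]))
        return eligible[:top_n]

    predictions = {"a": 4.8, "b": 4.5, "c": 3.9, "d": 3.6, "e": 2.1}
    popularity = {"a": 10000, "b": 8000, "c": 120, "d": 45, "e": 9000}
    print(rerank_for_diversity(predictions, popularity))   # ['d', 'c', 'b']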


Title: Jamming-Aware Traffic Allocation for Multiple-Path Routing Using Portfolio Selection

Multiple-path source routing protocols allow a data source node to distribute the total traffic among available paths. In this article, we consider the problem of jamming-aware source routing in which the source node performs traffic allocation based on empirical jamming statistics at individual network nodes. We formulate this traffic allocation as a lossy network flow optimization problem using portfolio selection theory from financial statistics. We show that in multi-source networks, this centralized optimization problem can be solved using a distributed algorithm based on decomposition in network utility maximization (NUM). We demonstrate the network's ability to estimate the impact of jamming and incorporate these estimates into the traffic allocation problem. Finally, we simulate the achievable throughput using our proposed traffic allocation method in several scenarios.
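
For illustration, a minimal Python sketch of portfolio-style traffic allocation: each path has an estimated delivery rate (mean) and a variance reflecting jamming uncertainty, and traffic shares maximize a mean-minus-risk objective on the simplex. The bisection below assumes independent paths and is only illustrative; it is not the paper's NUM decomposition.

    # Sketch: traffic shares w over paths maximize sum(w*mu) - (k/2)*sum(w^2*var)
    # subject to w >= 0 and sum(w) = 1, solved by bisection on the Lagrange
    # multiplier (paths treated as independent).
    def allocate(mu, var, risk_aversion=10.0):
        def weights(lam):
            return [max(0.0, (m - lam) / (risk_aversion * v)) for m, v in zip(mu, var)]
        lo, hi = min(mu) - 1.0, max(mu)
        for _ in range(100):
            mid = (lo + hi) / 2
            if sum(weights(mid)) > 1.0:   # total share decreases as lambda grows
                lo = mid
            else:
                hi = mid
        return weights((lo + hi) / 2)

    mu  = [0.9, 0.7, 0.5]     # estimated per-path delivery rates
    var = [0.05, 0.01, 0.2]   # higher variance = more jamming uncertainty
    print([round(w, 3) for w in allocate(mu, var)])   # risky, lossy paths get little traffic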

Title: Live Streaming with Receiver-based Peer-division Multiplexing

A number of commercial peer-to-peer systems for live streaming have been introduced in recent years. The behavior of these popular systems has been extensively studied in several measurement papers. Due to the proprietary nature of these commercial systems, however, these studies have to rely on a "black-box" approach, where packet traces are collected from a single or a limited number of measurement points, to infer various properties of traffic on the control and data planes. Although such studies are useful to compare different systems from the end-user's perspective, it is difficult to intuitively understand the observed properties without fully reverse-engineering the underlying systems. In this paper we describe the network architecture of Zattoo, one of the largest production live streaming providers in Europe at the time of writing, and present a large-scale measurement study of Zattoo using data collected by the provider. To highlight, we found that even when the Zattoo system was heavily loaded with as high as 20,000 concurrent users on a single overlay, the median channel join delay remained less than 2 to 5 seconds, and that, for a majority of users, the streamed signal lags the over-the-air broadcast signal by no more than 3 seconds.

Title: Monitoring Service Systems from a Language-Action Perspective

The exponential growth in the global economy is being supported by service systems, realized by recasting mission-critical application services accessed across organizational boundaries. The Language-Action Perspective (LAP) is based upon the notion that "expert behavior requires an exquisite sensitivity to context and that such sensitivity is more in the realm of the human than in that of the artificial." Business processes are increasingly distributed and open, making them prone to failure. Monitoring is, therefore, an important concern not only for the processes themselves but also for the services that comprise these processes. We present a framework for multilevel monitoring of these service systems. It formalizes interaction protocols, policies, and commitments that account for standard and extended effects following the language-action perspective, and allows specification of goals and monitors at varied abstraction levels. We demonstrate how the framework can be implemented and evaluate it with multiple scenarios, such as merchant-customer transactions, that include specifying and monitoring open-service policy commitments.

Title: Network Coding Based Privacy Preservation against Traffic Analysis in Multi-hop Wireless Networks

Privacy threat is one of the critical issues in multi-hop wireless networks, where attacks such as traffic analysis and flow tracing can be easily launched by a malicious adversary due to the open wireless medium. Network coding has the potential to thwart these attacks since the coding/mixing operation is encouraged at intermediate nodes. However, the simple deployment of network coding cannot achieve the goal once enough packets are collected by the adversaries. On the other hand, the coding/mixing nature precludes the feasibility of employing the existing privacy-preserving techniques, such as Onion Routing. In this paper, we propose a novel network coding based privacy-preserving scheme against traffic analysis in multi-hop wireless networks. With homomorphic encryption, the proposed scheme offers two significant privacy-preserving features, packet flow untraceability and message content confidentiality, for efficiently thwarting the traffic analysis attacks. Moreover, the proposed scheme keeps the random coding feature. Theoretical analysis and simulative evaluation demonstrate the validity and efficiency of the proposed scheme.


Title: One Size Does Not Fit All: Towards User- and Query-Dependent Ranking for Web Databases

With the emergence of the deep Web, searching Web databases in domains such as vehicles, real estate, etc. has become a routine task. One of the problems in this context is ranking the results of a user query. Earlier approaches for addressing this problem have used frequencies of database values, query logs, and user profiles. A common thread in most of these approaches is that ranking is done in a user- and/or query-independent manner. This paper proposes a novel query- and user-dependent approach for ranking query results in Web databases. We present a ranking model, based on two complementary notions of user and query similarity, to derive a ranking function for a given user query. This function is acquired from a sparse workload comprising several such ranking functions derived for various user-query pairs. The model is based on the intuition that similar users display comparable ranking preferences over the results of similar queries. We define these similarities formally in alternative ways and discuss their effectiveness analytically and experimentally over two distinct Web databases.

Title: Optimal service pricing for a cloud cache

Cloud applications that offer data management services are emerging. Such clouds support caching of data in order to provide quality query services. The users can query the cloud data, paying the price for the infrastructure they use. Cloud management necessitates an economy that manages the service of multiple users in an efficient, but also resource-economic, way that allows for cloud profit. Naturally, the maximization of cloud profit given some guarantees for user satisfaction presumes an appropriate price-demand model that enables optimal pricing of query services. The model should be plausible in that it reflects the correlation of cache structures involved in the queries. Optimal pricing is achieved based on a dynamic pricing scheme that adapts to time changes. This paper proposes a novel price-demand model designed for a cloud cache and a dynamic pricing scheme for queries executed in the cloud cache. The pricing solution employs a novel method that estimates the correlations of the cache services in a time-efficient manner. The experimental study shows the efficiency of the solution.


Title: Optimal Stochastic Location Updates in Mobile Ad Hoc Networks

We consider the location service in a mobile ad hoc network (MANET), where each node needs to maintain its location information by 1) frequently updating its location information within its neighboring region, which is called neighborhood update (NU), and 2) occasionally updating its location information to certain distributed location servers in the network, which is called location server update (LSU). The tradeoff between the operation costs in location updates and the performance losses of the target application due to location inaccuracies (i.e., application costs) imposes a crucial question for nodes to decide the optimal strategy to update their location information, where the optimality is in the sense of minimizing the overall costs. In this paper, we develop a stochastic sequential decision framework to analyze this problem. Under a Markovian mobility model, the location update decision problem is modeled as a Markov Decision Process (MDP). We first investigate the monotonicity properties of optimal NU and LSU operations with respect to location inaccuracies under a general cost setting. Then, given a separable cost structure, we show that the location update decisions of NU and LSU can be independently carried out without loss of optimality, i.e., a separation property. From the discovered separation property of the problem structure and the monotonicity properties of optimal actions, we find that 1) there always exists a simple optimal threshold-based update rule for LSU operations; and 2) for NU operations, an optimal threshold-based update rule exists in a low-mobility scenario. In the case that no a priori knowledge of the MDP model is available, we also introduce a practical model-free learning approach to find a near-optimal solution for the problem.
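
For illustration, a minimal Python sketch showing how a threshold-type update rule emerges from a tiny MDP: states are discretized location-inaccuracy levels, one action sends a location server update (resetting inaccuracy at a fixed cost), and value iteration yields the policy. The mobility and cost numbers are assumptions for illustration only.

    # Sketch: tiny MDP over discretized inaccuracy levels. Action 1 sends a
    # location server update (cost C_UPDATE, resets inaccuracy); action 0 pays an
    # application cost that grows with inaccuracy. Value iteration gives the policy.
    N = 10                                   # inaccuracy levels 0..9
    C_UPDATE = 4.0                           # cost of a location server update
    APP_COST = [0.5 * s for s in range(N)]   # per-step application cost at level s
    P_DRIFT = 0.7                            # chance inaccuracy grows by one level
    GAMMA = 0.95                             # discount factor

    def stay_cost(V, s):
        nxt = min(s + 1, N - 1)
        return APP_COST[s] + GAMMA * (P_DRIFT * V[nxt] + (1 - P_DRIFT) * V[s])

    def value_iteration(iters=500):
        V = [0.0] * N
        for _ in range(iters):
            V = [min(stay_cost(V, s), C_UPDATE + GAMMA * V[0]) for s in range(N)]
        return [1 if C_UPDATE + GAMMA * V[0] < stay_cost(V, s) else 0 for s in range(N)]

    print(value_iteration())   # e.g. [0, 0, 0, 1, 1, ...]: update once drift passes a threshold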

Title: Personalized Ontology Model for Web Information Gathering

As a model for knowledge description and formalization, ontologies are widely used to represent user profiles in personalized web information gathering. However, when representing user profiles, many models have utilized only knowledge from either a global knowledge base or user local information. In this paper, a personalized ontology model is proposed for knowledge representation and reasoning over user profiles. This model learns ontological user profiles from both a world knowledge base and user local instance repositories. The ontology model is evaluated by comparing it against benchmark models in web information gathering. The results show that this ontology model is successful.

Title: Privacy-Preserving Updates to Anonymous and Confidential Databases


Suppose Alice owns a k-anonymous database and needs to determine whether her database, when inserted with a tuple owned by Bob, is still k-anonymous. Also, suppose that access to the database is strictly controlled, because for example data are used for certain experiments that need to be maintained confidential. Clearly, allowing Alice to directly read the contents of the tuple breaks the privacy of Bob (e.g., a patient's medical record); on the other hand, the confidentiality of the database managed by Alice is violated once Bob has access to the contents of the database. Thus, the problem is to check whether the database inserted with the tuple is still k-anonymous, without letting Alice and Bob know the contents of the tuple and the database, respectively. In this paper, we propose two protocols solving this problem on suppression-based and generalization-based k-anonymous and confidential databases. The protocols rely on well-known cryptographic assumptions, and we provide theoretical analyses to prove their soundness and experimental results to illustrate their efficiency.
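
For illustration, a minimal Python sketch of the underlying plaintext check, i.e., whether inserting Bob's tuple keeps every quasi-identifier group of size at least k. The paper's contribution is performing this check without either party revealing their data; those cryptographic protocols are not reproduced here, and the attribute names are assumptions.

    # Sketch: the plaintext check only. After inserting Bob's tuple, every
    # quasi-identifier combination must still be shared by at least k rows.
    from collections import Counter

    QUASI_IDS = ("zip", "age_range", "sex")   # assumed quasi-identifier attributes

    def is_k_anonymous(rows, k):
        groups = Counter(tuple(r[a] for a in QUASI_IDS) for r in rows)
        return all(count >= k for count in groups.values())

    db = [
        {"zip": "560004", "age_range": "20-30", "sex": "F", "diagnosis": "flu"},
        {"zip": "560004", "age_range": "20-30", "sex": "F", "diagnosis": "cold"},
        {"zip": "560050", "age_range": "30-40", "sex": "M", "diagnosis": "asthma"},
        {"zip": "560050", "age_range": "30-40", "sex": "M", "diagnosis": "flu"},
    ]
    bob = {"zip": "560099", "age_range": "40-50", "sex": "M", "diagnosis": "diabetes"}
    print(is_k_anonymous(db, 2))          # True
    print(is_k_anonymous(db + [bob], 2))  # False: Bob's tuple forms a group of one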

Title: Publishing Search Logs – A Comparative Study of Privacy Guarantees

Search engine companies collect the "database of intentions", the histories of their users' search queries. These search logs are a gold mine for researchers. Search engine companies, however, are wary of publishing search logs in order not to disclose sensitive information. In this paper we analyze algorithms for publishing frequent keywords, queries, and clicks of a search log. We first show how methods that achieve variants of k-anonymity are vulnerable to active attacks. We then demonstrate that the stronger guarantee ensured by differential privacy unfortunately does not provide any utility for this problem. Our paper concludes with a large experimental study using real applications where we compare ZEALOUS and previous work that achieves k-anonymity in search log publishing. Our results show that ZEALOUS yields comparable utility to k-anonymity while at the same time achieving much stronger privacy guarantees.
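
For illustration, a minimal Python sketch in the spirit of a ZEALOUS-style release: add Laplace noise to per-keyword counts (each user counted at most once per keyword) and publish only keywords whose noisy count clears a threshold. The parameters and the simplified noise accounting are assumptions, not the paper's exact algorithm or proof.

    # Sketch: noisy-threshold release of frequent keywords from a search log.
    import math, random
    from collections import Counter

    def laplace_noise(scale):
        u = random.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

    def publish_keywords(query_log, epsilon=1.0, threshold=20.0):
        counts = Counter()
        for user, keywords in query_log.items():
            for kw in set(keywords):          # bound one user's influence per keyword
                counts[kw] += 1
        released = {}
        for kw, c in counts.items():
            noisy = c + laplace_noise(1.0 / epsilon)
            if noisy >= threshold:
                released[kw] = round(noisy)
        return released

    log = {f"user{i}": ["weather", "news"] for i in range(50)}
    log["user0"] = ["weather", "rare disease clinic"]    # infrequent, sensitive query
    print(publish_keywords(log))   # frequent keywords survive; the rare one is suppressed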

Title: Robust Correlation of Encrypted Attack Traffic through Stepping Stones by Flow Watermarking

Network-based intruders seldom attack their victims directly from their own computer. Often, they stage their attacks through intermediate "stepping stones" in order to conceal their identity and origin. To identify the source of the attack behind the stepping stone(s), it is necessary to correlate the incoming and outgoing flows or connections of a stepping stone. To resist attempts at correlation, the attacker may encrypt or otherwise manipulate the connection traffic. Timing-based correlation approaches have been shown to be quite effective in correlating encrypted connections. However, timing-based correlation approaches are subject to timing perturbations that may be deliberately introduced by the attacker at stepping stones. In this project, our watermark-based approach is "active" in that it embeds a unique watermark into the encrypted flows by slightly adjusting the timing of selected packets. The unique watermark that is embedded in the encrypted flow gives us a number of advantages over passive timing-based correlation in resisting timing perturbations by the attacker. A two-fold monotonically increasing compound mapping is created and proved to yield more distinctive visible watermarks in the watermarked image. Security protection measures by parameter and mapping randomizations have also been proposed to deter attackers from illicit image recoveries.
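
For illustration, a minimal Python sketch of timing-based flow watermarking: a keyed selection of packets is delayed slightly to encode watermark bits, and the detector recovers them by comparing average delays between the incoming and outgoing flows. The delay value, grouping scheme, and jitter model are assumptions for illustration, not the project's exact embedding.

    # Sketch: embed a secret bit string by adding a small delay to a keyed
    # selection of packets; detect it by comparing average delays per group.
    import random

    DELAY = 0.02   # seconds added to packets assigned to a "1" bit

    def keyed_groups(n_packets, n_bits, key):
        random.seed(key)
        return [random.randrange(n_bits) for _ in range(n_packets)]

    def embed(timestamps, watermark, key):
        groups = keyed_groups(len(timestamps), len(watermark), key)
        return [t + (DELAY if watermark[g] else 0.0) for t, g in zip(timestamps, groups)]

    def detect(incoming, outgoing, n_bits, key):
        groups = keyed_groups(len(incoming), n_bits, key)
        delay_sum, count = [0.0] * n_bits, [0] * n_bits
        for a, b, g in zip(incoming, outgoing, groups):
            delay_sum[g] += b - a
            count[g] += 1
        return [1 if delay_sum[g] / max(count[g], 1) > DELAY / 2 else 0 for g in range(n_bits)]

    flow = [0.1 * i for i in range(200)]                    # incoming packet times
    wm = [1, 0, 1, 1, 0, 0, 1, 0]
    outgoing = [t + random.uniform(0, 0.008) for t in embed(flow, wm, key=42)]  # attacker jitter
    print(detect(flow, outgoing, len(wm), key=42) == wm)    # True with high probability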

Title: Scalable Learning of Collective Behavior

The study of collective behavior is to understand how individuals behave in a social networking environment. Oceans of data generated by social media like Facebook, Twitter, Flickr, and YouTube present opportunities and challenges to study collective behavior on a large scale. In this work, we aim to learn to predict collective behavior in social media. In particular, given information about some individuals, how can we infer the behavior of unobserved individuals in the same network? A social-dimension-based approach has been shown effective in addressing the heterogeneity of connections presented in social media. However, the networks in social media are normally of colossal size, involving hundreds of thousands of actors. The scale of these networks entails scalable learning of models for collective behavior prediction. To address the scalability issue, we propose an edge-centric clustering scheme to extract sparse social dimensions. With sparse social dimensions, the proposed approach can efficiently handle networks of millions of actors while demonstrating a comparable prediction performance to other non-scalable methods.
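
For illustration, a minimal Python sketch of edge-centric clustering: each edge is represented by its endpoints' combined neighborhood, edges are clustered with a tiny k-means, and a node's sparse social dimensions are the clusters of its incident edges. The feature construction and toy graph are assumptions for illustration; the paper's full pipeline and scale are not reproduced.

    # Sketch: edges described by their endpoints' combined neighbourhoods,
    # clustered with a tiny k-means; a node's sparse social dimensions are the
    # clusters of its incident edges.
    import random
    from collections import defaultdict

    def edge_features(edges, nodes):
        index = {n: i for i, n in enumerate(nodes)}
        adj = defaultdict(set)
        for u, v in edges:
            adj[u].add(v); adj[v].add(u)
        feats = []
        for u, v in edges:
            vec = [0.0] * len(nodes)
            for n in adj[u] | adj[v] | {u, v}:
                vec[index[n]] = 1.0
            feats.append(vec)
        return feats

    def kmeans(points, k, iters=20):
        centers = random.sample(points, k)
        labels = [0] * len(points)
        for _ in range(iters):
            labels = [min(range(k), key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
                      for p in points]
            for c in range(k):
                members = [points[i] for i, l in enumerate(labels) if l == c]
                if members:
                    centers[c] = [sum(col) / len(members) for col in zip(*members)]
        return labels

    edges = [("a", "b"), ("b", "c"), ("a", "c"), ("d", "e"), ("e", "f"), ("d", "f"), ("c", "d")]
    nodes = sorted({n for e in edges for n in e})
    labels = kmeans(edge_features(edges, nodes), k=2)
    dims = defaultdict(set)
    for (u, v), lab in zip(edges, labels):
        dims[u].add(lab); dims[v].add(lab)
    print(dict(dims))   # each node belongs only to the clusters of its own edges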


Title: The Awareness Network: To Whom Should I Display My Actions? And, Whose Actions Should I Monitor?

The concept of awareness plays a pivotal role in research in Computer-Supported Cooperative Work. Recently, Software Engineering researchers interested in the collaborative nature of software development have explored the implications of this concept in the design of software development tools. A critical aspect of awareness is the associated coordinative work practices of displaying and monitoring actions. This aspect concerns how colleagues monitor one another's actions to understand how these actions impact their own work and how they display their actions in such a way that others can easily monitor them while doing their own work. In this paper, we focus on an additional aspect of awareness: the identification of the social actors who should be monitored and the actors to whom their actions should be displayed. We address this aspect by presenting software developers' work practices based on ethnographic data from three different software development teams. In addition, we illustrate how these work practices are influenced by different factors, including the organizational setting, the age of the project, and the software architecture. We discuss how our results are relevant for both CSCW and Software Engineering researchers.



Title: Throughput Optimization in High Speed Downlink Packet Access (HSDPA)

In this paper, we investigate throughput optimization in High Speed Downlink Packet Access (HSDPA). Specifically, we propose offline and online algorithms for adjusting the Channel Quality Indicator (CQI) used by the network to schedule data transmission. In the offline algorithm, a given target BLER is achieved by adjusting the CQI based on ACK/NAK history. By sweeping through different target BLERs, we can find the throughput-optimal BLER offline. This algorithm could be used not only to optimize throughput but also to enable fair resource allocation among mobile users in HSDPA. In the online algorithm, the CQI offset is adapted using an estimated short-term throughput gradient without specifying a target BLER. An adaptive stepsize mechanism is proposed to track temporal variation of the environment. We investigate the convergence behavior of both algorithms. Simulation results show that the proposed offline algorithm can achieve the given target BLER with good accuracy. Both algorithms yield up to 30% HSDPA throughput improvement over that with a 10% target BLER.
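
For illustration, a minimal Python sketch of an ACK/NAK-driven outer loop that steers the CQI offset toward a target BLER (a jump-up/jump-down rule). The toy link model and step sizes are assumptions for illustration, not the paper's offline sweep or online gradient algorithm.

    # Sketch: jump-up/jump-down CQI offset control driven by ACK/NAK feedback.
    import random

    def adapt_cqi_offset(target_bler=0.05, n_tti=20000, step_ack=0.001):
        step_nak = step_ack * (1 - target_bler) / target_bler   # equilibrium at the target
        offset, errors = 0.0, 0
        for _ in range(n_tti):
            # Toy link: a larger offset means more aggressive transport blocks, hence more errors.
            bler = min(1.0, max(0.0, 0.10 + 0.3 * offset))
            if random.random() < bler:      # NAK
                errors += 1
                offset -= step_nak
            else:                           # ACK
                offset += step_ack
        return offset, errors / n_tti

    offset, measured = adapt_cqi_offset()
    print(round(offset, 3), round(measured, 3))   # measured BLER settles near the 5% target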

Title: USHER: Improving Data Quality with Dynamic Forms

Data quality is a critical problem in modern databases. Data entry forms present the first and arguably best opportunity for detecting and mitigating errors, but there has been little research into automatic methods for improving data quality at entry time. In this paper, we propose USHER, an end-to-end system for form design, entry, and data quality assurance. Using previous form submissions, USHER learns a probabilistic model over the questions of the form. USHER then applies this model at every step of the data entry process to improve data quality. Before entry, it induces a form layout that captures the most important data values of a form instance as quickly as possible. During entry, it dynamically adapts the form to the values being entered, and enables real-time feedback to guide the data enterer toward their intended values. After entry, it re-asks questions that it deems likely to have been entered incorrectly. We evaluate all three components of USHER using two real-world data sets. Our results demonstrate that each component has the potential to improve data quality considerably, at a reduced cost when compared to current practice.
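
For illustration, a minimal Python sketch of the after-entry idea: learn value probabilities from past submissions (here a simple count-based conditional on one other field) and flag answers that look improbable so the form can re-ask them. The field names and threshold are assumptions for illustration; USHER's learned probabilistic model and layout optimization are not reproduced.

    # Sketch: flag improbable answers using a count-based conditional learned
    # from past submissions, so the form can re-ask them.
    from collections import Counter, defaultdict

    def train(submissions, given, target):
        table = defaultdict(Counter)
        for row in submissions:
            table[row[given]][row[target]] += 1
        return table

    def suspicious(entry, table, given, target, min_prob=0.05):
        counts = table[entry[given]]
        total = sum(counts.values())
        prob = counts[entry[target]] / total if total else 0.0
        return prob < min_prob

    past = ([{"district": "Bangalore", "pincode": "560004"}] * 40 +
            [{"district": "Bangalore", "pincode": "560050"}] * 60 +
            [{"district": "Mysore", "pincode": "570001"}] * 50)

    model = train(past, given="district", target="pincode")
    entry = {"district": "Bangalore", "pincode": "570001"}       # likely a data-entry slip
    print(suspicious(entry, model, "district", "pincode"))       # True: re-ask the pincode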