Competitor price monitoring: necessary research or waste of time?

Surely if you price your products and services according to their branding, their worth, then they’ll fly off the shelves…?

Obviously I’m playing devil’s advocate here: that simply isn’t true. Competitor prices are what create value in your market sphere, so you have to look at them hard if you’re going to work out what you should be charging for your own product.

This doesn’t always mean you should be charging less than your competition, mind. If you are positioning yourself as an upper-level entrant into a particular market sector, you should charge more than the competitor who sells mid-level products in the same sphere.

I’ll use stereo equipment as an example, because it has distinct range definitions and I know quite a lot about it too (come round my house on a Saturday and you won’t hear silence, I can tell you!).

So, stereos basically fall into three categories – entry level, mid-range, and high end. An entry level stereo is composed of components costing between £100 and £300, with speakers weighing in at between £250 and £500.

A mid-range stereo system is composed of components costing between £300 and £700, with speakers that cost roughly £1,000 per pair.

A high end stereo may have components costing more than £1,000 each, and speakers costing at least £1,500 per speaker – so £3,000 for the pair.

Obviously in these three bands there’s quite a range of pricing going on. And obviously everyone making stereo equipment at each level wants to grab their share of the market. So how does competitor price monitoring work here?

The premium factor in any stereo purchase is sound quality. If you are not interested in sound quality you’re not buying stereo components – you buy an iPod or something like that. So it follows that every single person buying this stuff wants a sound experience noticeably better than listening to MP3s or car stereos.

So the pricing strategy is to deliver sound quality at an existing benchmark, but for less, when you are selling entry level kit; and to deliver better sound quality, for more money, at the top end of the ranges.

Competitor price monitoring must, then, look at the price of equivalent quality stereo kit and make a decision: drop your price, or change your design to justify raising it.

One of the most popular areas of stereo design is high end entry level – the £300 per unit equipment. At this price you wouldn’t traditionally expect the sound quality of an £800 unit. So the big name stereo equipment makers started coming out with stripped down versions of much higher spec stuff – delivering £800 sound quality at £300 prices.

How does this then sit with the £800 and upwards market? Now competitor price monitoring for mid-range stereo equipment must look at the branding as well as the actual quality of the £800 stuff. And the answer is that you get more sensitivity and style for your money. At least that’s what the branding tells us!

About the author: Kristina Louis is a freelance content writer by profession and writes articles on behalf of Competitor Price Monitoring. Business and internet technology are her topics of interest, and she finds immense pleasure in writing articles on business.


Understanding HDFS Architecture – Part 2

This is the second post in the “Understanding HDFS Architecture” series. In this post we will discuss the basic components of HDFS.

Namenode

The Namenode acts as the master in HDFS. It stores the file system metadata and a transaction log of changes happening in the file system. The Namenode does not store actual file data.

The Namenode also maintains the block map built from reports sent by individual Datanodes. Whenever a client wants to perform an operation on a file, it contacts the Namenode, which responds by providing the block map and Datanode information.
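To make this concrete, here is a minimal sketch using the standard HDFS Java client: it asks the Namenode for the block map of a file and prints which Datanodes hold each block. The file path and Namenode address are placeholder assumptions, not values from this post.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockMapLookup {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://namenode-host:8020"); // hypothetical Namenode address
            FileSystem fs = FileSystem.get(conf);

            Path file = new Path("/data/sample.log"); // hypothetical file
            FileStatus status = fs.getFileStatus(file);

            // The Namenode answers this call with block metadata only; no file data is read.
            BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation block : blocks) {
                System.out.println("offset=" + block.getOffset()
                        + " length=" + block.getLength()
                        + " hosts=" + String.join(",", block.getHosts()));
            }
            fs.close();
        }
    }

The actual reads and writes then go directly to the Datanodes listed for each block; the Namenode only serves the metadata.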

Datanode

The Datanode is the actual storage component in HDFS; Datanodes store the file data. A typical production HDFS cluster has one Namenode and multiple Datanodes.

Datanodes talk to the Namenode in two ways: heartbeats, which let the Namenode know that a particular Datanode is alive, and block reports, which list the data blocks held by that Datanode. Datanodes also talk to other Datanodes directly for data replication.
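How frequent this traffic is can be tuned. The property names in the sketch below come from hdfs-default.xml, but they have changed between Hadoop versions, so treat them as an assumption for your release; they would normally be set in hdfs-site.xml rather than in code.

    import org.apache.hadoop.conf.Configuration;

    public class HeartbeatSettings {
        public static void main(String[] args) {
            Configuration conf = new Configuration();

            // Seconds between Datanode heartbeats (the usual default is 3).
            conf.setLong("dfs.heartbeat.interval", 3);

            // Milliseconds between full block reports (the usual default is 6 hours).
            conf.setLong("dfs.blockreport.intervalMsec", 6 * 60 * 60 * 1000L);

            System.out.println("heartbeat = " + conf.get("dfs.heartbeat.interval") + " s, "
                    + "block report = " + conf.get("dfs.blockreport.intervalMsec") + " ms");
        }
    }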

Checkpoint Node

HDFS stores its namespace in an FsImage file and its file system transaction log in an EditLog file on the Namenode's local disk. When the Namenode starts up, the changes recorded in the EditLog are merged into the FsImage, so that HDFS always has up-to-date file system metadata. After the merge, HDFS replaces the old FsImage with the new one, which represents the current state of HDFS, and then opens a fresh EditLog.

In any HDFS instance, the Namenode is a single point of failure: it maintains the namespace and EditLog, and if these files are corrupted or lost, the whole cluster goes down. To reduce this risk, additional copies of the FsImage and EditLog can be maintained on a different machine using a Checkpoint node.

The Checkpoint node creates periodic checkpoints of the namespace. It downloads the latest FsImage and EditLog from the active Namenode, stores them locally, merges them, and uploads the result back to the active Namenode.

A production Hadoop cluster should run the Checkpoint node on a different machine with the same memory configuration as the active Namenode.

The Checkpoint node stores the latest checkpoint in a directory with the same structure as the Namenode's directory, so the checkpointed image is always available for the Namenode to read if necessary. It is possible to run multiple Checkpoint nodes in a cluster; this is specified in the HDFS configuration file.
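The checkpoint schedule and location are controlled by configuration. The property names in the sketch below are the ones used by newer Hadoop releases (older releases used fs.checkpoint.* names), so check your version; the directory path is a made-up example.

    import org.apache.hadoop.conf.Configuration;

    public class CheckpointSettings {
        public static void main(String[] args) {
            Configuration conf = new Configuration();

            // Seconds between two consecutive checkpoints (commonly 3600, i.e. one hour).
            conf.setLong("dfs.namenode.checkpoint.period", 3600);

            // Force a checkpoint once this many transactions have accumulated in the EditLog.
            conf.setLong("dfs.namenode.checkpoint.txns", 1000000);

            // Local directory on the checkpoint machine, mirroring the Namenode's layout.
            conf.set("dfs.namenode.checkpoint.dir", "/data/hdfs/namesecondary"); // hypothetical path

            System.out.println("checkpoint every " + conf.get("dfs.namenode.checkpoint.period") + " s");
        }
    }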

HDFS Architecture

Backup Node

The Backup node works in much the same way as the Checkpoint node. It provides the same checkpoint functionality and, in addition, maintains an up-to-date copy of the file system namespace in memory, synchronized with the Namenode. The Backup node applies the EditLog changes to its in-memory namespace and persists it to disk, so it always has current copies of the EditLog and FsImage both on disk and in memory.

In contrast with the Checkpoint node, which must download copies of the FsImage and EditLog, the Backup node does not need to download anything, because it already holds the updated namespace in memory. It only needs to apply the latest EditLog changes to its in-memory namespace and write the FsImage and EditLog to its local disk. This makes the Backup node's checkpoint process more efficient than the Checkpoint node's.

The Backup node's memory requirement is the same as the Namenode's, since it maintains the namespace in memory just as the Namenode does. You can have only one Backup node (multiple Backup nodes are not supported at this point in time), and no Checkpoint node can run while a Backup node is running. In other words, you can run either a Backup node or a Checkpoint node, not both at the same time.

Since the Backup node maintains a copy of the namespace in memory, you can start the Namenode so that it no longer persists the namespace itself and instead delegates this task to the Backup node. In that case the Namenode imports the namespace from the Backup node whenever it needs it. This is done by starting the Namenode with the -importCheckpoint option.
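For completeness, the Backup node is located through the addresses below. These property names exist in the hdfs-default.xml of Hadoop 2.x releases, but verify them against your version; the host name and ports are placeholders.

    import org.apache.hadoop.conf.Configuration;

    public class BackupNodeSettings {
        public static void main(String[] args) {
            Configuration conf = new Configuration();

            // Address on which the Backup node receives the stream of namespace edits.
            conf.set("dfs.namenode.backup.address", "backup-host:50100");      // hypothetical host
            // HTTP address from which the Backup node serves its checkpointed FsImage.
            conf.set("dfs.namenode.backup.http-address", "backup-host:50105"); // hypothetical host

            System.out.println("backup node at " + conf.get("dfs.namenode.backup.address"));
        }
    }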


Understanding HDFS Architecture – Part 1

HDFS is a distributed file system used in the Hadoop ecosystem. HDFS is similar to other distributed file systems, but it also possesses features that set it apart. The best part of HDFS is that it does not require server-class hardware; it can run on low-cost commodity hardware. HDFS is best suited to applications that work on large data sets in batch mode. When we say large data sets, we are talking about files ranging from more than 1 GB up to petabytes.

Features of HDFS

Quick Recovery from Hardware Failure

A typical HDFS cluster consists of a number of machines, and these machines may fail without any warning. In such cases, HDFS should be able to detect the hardware fault and quickly recover the data stored on the failed hardware. This is a core architectural goal of HDFS.

Large Data Set

HDFS is meant for large data sets: a typical file on HDFS may range from gigabytes to terabytes in size. HDFS is built to handle such large files and scales easily to hundreds of nodes in a cluster, supporting rapid data growth.

Simple Data Coherency Model

HDFS follows a write-once-read-many model, meaning a file, once written and closed, is never updated. This simple data coherency model enables high-throughput data access. For example, consider a weblog where log files are written once and then only read.
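A minimal sketch of this model with the HDFS Java client is shown below: the file is created and written exactly once, closed, and then only ever opened for reading. The log file path is a made-up example.

    import java.nio.charset.StandardCharsets;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class WriteOnceReadMany {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path logFile = new Path("/logs/access-2013-01-01.log"); // hypothetical path

            // Write the file once, then close it; it is never updated afterwards.
            try (FSDataOutputStream out = fs.create(logFile, true)) {
                out.write("GET /index.html 200\n".getBytes(StandardCharsets.UTF_8));
            }

            // Read it back as many times as needed.
            try (FSDataInputStream in = fs.open(logFile)) {
                IOUtils.copyBytes(in, System.out, conf, false);
            }
            fs.close();
        }
    }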

Fast Large Data Set Computation

HDFS moves computation near the data instead of pulling the data to a central place and then performing the computation. This makes computation fast, since it happens in a distributed fashion on the hardware where the data is stored. It also minimizes network congestion, because only computed results travel over the network instead of raw data, which is very beneficial when you have a very large data set.
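To make the idea concrete, here is a minimal word-count mapper sketch using the MapReduce Java API that is typically layered on top of HDFS. It is an illustration of computation being shipped to the data, not something described in this post; the job driver and reducer are omitted.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Each map task is scheduled, where possible, on a Datanode that already holds
    // the block it processes, so the raw data never crosses the network.
    public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE); // only small (word, count) pairs leave the node
            }
        }
    }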

Heterogeneous Hardware Portability

HDFS is portable across various types of hardware, which makes it attractive in terms of cost and maintenance.

HDFS Architecture Introduction

HDFS is a distributed file system that follows a master-slave architecture. HDFS uses a cluster to store and manage data: the cluster consists of one master server and a number of slave servers. The master server acts as the Namenode, which manages the file system namespace and clients' access to files, while the slave servers run Datanodes, which actually store the data files.

How is data stored on HDFS?

Whenever a file needs to be stored on HDFS, it is internally divided into blocks, and these blocks are assigned to Datanodes. The Datanodes then perform read, write, block creation and deletion operations as instructed by the Namenode.
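The block size and replication factor that govern this splitting can be set per file when it is created. The sketch below is an illustration only; the file name, 128 MB block size and replication factor of three are assumptions, not values from this post.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockSizeExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            Path file = new Path("/data/large-input.csv"); // hypothetical file
            long blockSize = 128L * 1024 * 1024;           // split the file into 128 MB blocks
            short replication = 3;                         // store each block on three Datanodes
            int bufferSize = 4096;

            try (FSDataOutputStream out = fs.create(file, true, bufferSize, replication, blockSize)) {
                out.writeBytes("id,value\n1,42\n");
            }
            fs.close();
        }
    }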

The Namenode performs file system namespace operations such as opening, closing and renaming files and directories. It is also responsible for storing and managing file system metadata.

Namenode Tasks:

  • Storing and managing file system metadata
  • Mapping blocks to Datanodes
  • File system namespace operations like opening, closing and renaming files and directories

Datanode Tasks:

  • Reading, writing, creating and deleting blocks
  • Performing these operations as instructed by the Namenode

HDFS Architecture


Announcing BIDW Q&A Community Forum Launch

We are very pleased to announce that we have launched the BIDW Q&A Community, an exclusive professional Q&A portal for Business Intelligence, Data Warehousing and data professionals.

The portal has the following features:

  • Get answers from experts
  • Contribute to the wiki on BIDW topics by asking intelligent questions
  • Ask and answer questions
  • Mark questions for inclusion in the community wiki
  • and more

This community should help you keep up with the latest happenings in the BIDW world and connect with other experts to find solutions.

Join the BIDW Q&A Community.

Please spread the word


What is BigData?

BigData is the next big thing in information technology. According to McKinsey, it is the next frontier for innovation, competition and productivity. But what BigData really means is the question. Is it the volume of data that makes data BigData? In this article we will try to understand the term.

What is Big Data?

There are more than 2.1 billion internet users in the world, and the number is growing every day. These users spend a lot of time on social networking sites and purchasing their favourite products online. More and more companies are bringing their product and service offerings online, making the internet a virtual world. All of this generates a lot of data: images, CRM data, website access logs. In this competitive, data-oriented world this data is very important, as it reveals the current performance of an organization, the profile of its customers and many other interesting things. However, that does not mean only data generated online is BigData; data generated by offline applications is also part of BigData.

So what turns data into BigData, and why has everyone suddenly started talking about it?

Gone are the days of gigabytes: terabytes of data are now normal, and petabytes soon will be. With this explosion of data, tools and technologies that previously worked well are finding it difficult to cope, and analysing the data takes a lot of time and drives up cost. This need for new data analysis technologies and tools that can handle such large volumes coined the term BigData, which points first of all to the volume of data. However, there are other characteristics that help in understanding BigData.

Put simply, a data set that is big in volume, and whose volume makes analysis difficult, is BigData.

BigData can be structured, like CRM or transaction data, or it can be images, geospatial data or web logs so big that ordinary analysis tools cannot deliver results in the expected time and demand more and more resources.

Structured and/or unstructured data that is very large in volume and growing substantially day by day, making analysis a resource-intensive and difficult task, is BigData.

Examples of BigData

  • Weblogs generated by social networking sites
  • Geospatial data
  • Very large CRM data files
