Thursday, 28 September 2017

Data Collection Vs Data Validation

Whether your company is a start-up or well established, accurate inventory control is a key issue, and an integral part of any inventory control system is the barcode. The concept of using barcodes is familiar from our daily lives. However, without a good understanding of what a barcode is and how it works, applying barcodes in an inventory environment may seem daunting.

A barcode in its simplest form is just another type of language. The most common barcode labels consist of the actual barcode (scanner readable) and words or numbers (human readable). A barcode does not intrinsically hold any additional information. However, the barcode plays a key role in inventory control because it allows a scanner to read the item number or SKU (Stock Keeping Unit) associated with a piece of inventory.
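To make this concrete, here is a minimal sketch (in Python, with hypothetical barcodes and SKUs) of the point above: the scanner only returns the text encoded on the label, and the inventory software supplies everything else by looking that value up.

```python
# Minimal sketch: a scanned barcode is only the item number / SKU text.
# The barcodes, SKUs and descriptions below are hypothetical examples.
item_master = {
    "012345678905": {"sku": "WIDGET-RED-10", "description": "Red widget, 10-pack"},
    "036000291452": {"sku": "GASKET-3IN", "description": "3-inch rubber gasket"},
}

scanned = "012345678905"         # what the scanner returns, as if typed on a keyboard
item = item_master.get(scanned)  # all other information comes from the system, not the label
print(item["sku"] if item else "Unknown barcode - not in the item master")
```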

Regarding inventory control, it is common for a business to have what appears on the surface to be one main stumbling block. For example, your business records received inventory accurately, but has trouble shipping the correct quantity or item to your customer. This is when the concept of data collection (spreadsheet) vs. data validation (database) comes into focus.

If we look at the example above from a data collection perspective, only the picking and shipping process needs to be corrected. We will assume, for this example, that the inventory we are receiving already carries the manufacturer's barcode label. A person picking an order and collecting data with a barcode scanner can record the item that was picked, the quantity, the date and time, and so on. This allows someone at a later time to review the information in a spreadsheet and possibly pinpoint why errors occur during picking. Note that this method does not correct any behavior during the picking process, nor does it take the total inventory process into account.
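As a rough illustration of the data collection approach, the sketch below (Python, with a hypothetical file name and field layout) simply appends each scan to a CSV file for someone to review in a spreadsheet later; note that nothing in it stops a wrong pick at the moment it happens.

```python
import csv
from datetime import datetime

# Data collection sketch: every pick is logged for later review in a spreadsheet.
# The file name and field layout are hypothetical.
def record_pick(sku, quantity, picker, log_path="pick_log.csv"):
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow([datetime.now().isoformat(), picker, sku, quantity])

record_pick("WIDGET-RED-10", 3, "operator_7")   # recorded, but never validated
```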

We will now look at the same example from a data validation perspective. For this process, we need to address the total inventory process and its initial set-up, not just the picking step. A relational database would be created around the manufacturer's item numbers. Through the use of a database, you can store item information such as minimum/maximum/reorder quantities and whether lot numbers or serial numbers are required; additionally, you can track vendor information, purchase orders, and sales orders and store them against the item number. This process requires receiving the inventory into a location, in a quantity, against a predefined inbound order, which normally corresponds to a purchase order.
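A minimal sketch of such a relational structure, using Python's built-in sqlite3 module and hypothetical table and column names, might look like the following; a real warehouse or ERP schema would of course be far richer.

```python
import sqlite3

# Data validation sketch: items, purchase orders and order lines live in a
# relational database keyed on the manufacturer's item number.
# All table and column names here are hypothetical.
conn = sqlite3.connect("inventory.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS items (
    item_number    TEXT PRIMARY KEY,   -- manufacturer's item number / SKU
    description    TEXT,
    min_qty        INTEGER,
    max_qty        INTEGER,
    reorder_qty    INTEGER,
    lot_tracked    INTEGER DEFAULT 0,  -- 1 if lot numbers are required
    serial_tracked INTEGER DEFAULT 0   -- 1 if serial numbers are required
);
CREATE TABLE IF NOT EXISTS purchase_orders (
    po_number TEXT PRIMARY KEY,
    vendor    TEXT
);
CREATE TABLE IF NOT EXISTS po_lines (
    po_number    TEXT REFERENCES purchase_orders(po_number),
    item_number  TEXT REFERENCES items(item_number),
    expected_qty INTEGER,
    received_qty INTEGER DEFAULT 0
);
""")
conn.commit()
```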

With data validation, the person receiving the inventory is prompted if the wrong item or quantity is received against an order, so the problem can be addressed immediately rather than at a later date. Once the inventory has been received and put away, picking works the same way: a predefined picking order, usually tied to a sales order or work order, directs the user to the proper location for the correct item in the correct quantity. Again, the relational database allows for immediate correction during the picking process.
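Continuing the hypothetical sketch above, the receiving step can check each scan against the expected purchase order line before anything is recorded, which is where the immediate prompt comes from; picking against a sales order would follow the same pattern.

```python
# Validation sketch: a scan is checked against the expected PO line before it
# is recorded, so errors surface immediately instead of at a later date.
def receive_scan(conn, po_number, item_number, qty):
    row = conn.execute(
        "SELECT expected_qty, received_qty FROM po_lines "
        "WHERE po_number = ? AND item_number = ?",
        (po_number, item_number),
    ).fetchone()
    if row is None:
        return "REJECT: item is not on this purchase order"
    expected, received = row
    if received + qty > expected:
        return f"REJECT: over-receipt ({received + qty} against {expected} expected)"
    conn.execute(
        "UPDATE po_lines SET received_qty = received_qty + ? "
        "WHERE po_number = ? AND item_number = ?",
        (qty, po_number, item_number),
    )
    conn.commit()
    return "OK: receipt recorded"
```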


Article Source: https://ezinearticles.com/?Data-Collection-Vs-Data-Validation&id=6215578

Tuesday, 26 September 2017

Web Data Extraction

The Internet as we know it today is a repository of information that can be accessed across geographical boundaries. In just over two decades, the Web has moved from a university curiosity to a fundamental research, marketing and communications vehicle that touches the everyday life of people all over the world. It is accessed by over 16% of the world's population, spanning over 233 countries.

As the amount of information on the Web grows, that information becomes ever harder to keep track of and use. Compounding the matter, this information is spread over billions of Web pages, each with its own independent structure and format. So how do you find the information you're looking for in a useful format - and do it quickly and easily without breaking the bank?

Search Isn't Enough

Search engines are a big help, but they can do only part of the work, and they are hard-pressed to keep up with daily changes. For all the power of Google and its kin, all that search engines can do is locate information and point to it. They go only two or three levels deep into a Web site to find information and then return URLs. Search engines cannot retrieve information from the deep Web, information that is available only after filling in some sort of registration form and logging in, and they cannot store it in a desirable format. To save the information in a desirable format or feed it into a particular application, after using the search engine to locate the data, you still have to do the following tasks to capture the information you need:

· Scan the content until you find the information.

· Mark the information (usually by highlighting with a mouse).

· Switch to another application (such as a spreadsheet, database or word processor).

· Paste the information into that application.

It's not all copy and paste

Consider the scenario of a company looking to build an email marketing list of over 100,000 names and email addresses from a public group. Even if a person manages to copy and paste a name and email address every second, it will take over 28 man-hours, translating to over $500 in wages alone, not to mention the other costs associated with it. The time involved in copying a record is directly proportional to the number of fields of data that have to be copied and pasted.
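The arithmetic behind that estimate is easy to check; the one-second-per-record rate comes from the scenario above, while the hourly wage of roughly $18 used below is an illustrative assumption.

```python
# Back-of-the-envelope check of the copy-paste estimate above.
records = 100_000
seconds_per_record = 1            # optimistic: one second per name + email
hourly_wage = 18                  # assumed wage in dollars, illustrative only

hours = records * seconds_per_record / 3600
print(f"{hours:.1f} man-hours")                  # ~27.8 hours of non-stop copying
print(f"${hours * hourly_wage:.0f} in wages")    # ~$500
```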

Is there any alternative to copy-paste?

A better solution, especially for companies that are aiming to exploit a broad swath of data about markets or competitors available on the Internet, lies in the use of custom Web harvesting software and tools.

Web harvesting software automatically extracts information from the Web and picks up where search engines leave off, doing the work the search engine can't. Extraction tools automate the reading, copying and pasting necessary to collect information for further use. The software mimics human interaction with the website and gathers data as if the website were being browsed, navigating the site to locate, filter and copy the required data at much higher speeds than is humanly possible. Advanced software can even browse a website and gather data quietly, without leaving obvious footprints of access.
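As a minimal sketch of what such a tool does under the hood, the Python snippet below fetches a page and pulls names and email addresses into a CSV file. The URL and the CSS selectors are hypothetical placeholders, and a production harvester would add pagination, login handling, throttling and error recovery on top of this.

```python
import csv

import requests
from bs4 import BeautifulSoup

# Web harvesting sketch: fetch a page, locate the required fields, write them out.
# The URL and the CSS selectors are hypothetical placeholders.
URL = "https://example.com/member-directory"

response = requests.get(URL, timeout=30)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")

with open("contacts.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "email"])
    for card in soup.select("div.member-card"):                   # hypothetical selector
        name = card.select_one("h3.name").get_text(strip=True)    # hypothetical selector
        email = card.select_one("a.email")["href"].removeprefix("mailto:")
        writer.writerow([name, email])
```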

The next article in this series will give more details about how such software works and uncover some myths about web harvesting.


Article Source: http://EzineArticles.com/expert/Thomas_Tuke/5484

Friday, 15 September 2017

Data Collection Techniques for a Successful Thesis

Irrespective of the topic and the subject of research you have chosen, the basic requirement and process remain the same: research. Re-search in itself means searching again through content that has already been searched, and it involves proven facts along with practical figures reflecting the authenticity and reliability of the study. These facts and figures, which are required to prove the fundamentals of the study, are known as data.

These data are collected according to the demands of the research topic and the study undertaken, and the collection techniques vary with the topic. For example, if the topic is something like "The Changing Era of HR Policies", the required data would be subjective and the technique would be chosen accordingly. Whereas if the topic is something like "Causes of Performance Appraisal", the required data would be objective, expressed in figures that show the different parameters, reasons and factors affecting the performance appraisal of a number of employees. So, let's take a broader look at the different data collection techniques that give a reliable grounding to your research -

• Primary Technique - Here, data collected directly from a first-hand source are known as primary data. Self-response is a sub-classification of primary data collection: you obtain respondents' own answers to a set of questions or a study. For example, personal in-depth interviews and questionnaires are self-response data collection techniques, but their limitation lies in the fact that self-responses can sometimes be biased or confused. On the other hand, the advantage is that the data are the most up to date, as they are collected directly from the source.

• Secondary Technique - In this technique the data are gathered from pre-collected sources; these are called secondary data. They are collected from articles, bulletins, annual reports, journals, published papers, government and non-government documents and case studies. Their limitation is that they may not be up to date or may have been manipulated, as they were not collected by the researcher.

Secondary data are easy to collect, as they are already compiled, and are preferred when time is short, whereas primary data are harder to amass. Thus, if a researcher wants up-to-date, reliable and factual data, they should prefer a primary source of collection. These data collection techniques vary according to the problem posed in the thesis, so go through the demands of your thesis before committing yourself to data collection.

Source: http://ezinearticles.com/?Data-Collection-Techniques-for-a-Successful-Thesis&id=9178754