Tuesday, May 29, 2012
Payment Gateway and Accepting Credit Cards Online - The Best Solution for Your Business
An often overlooked and under-analyzed part of building your eCommerce business is the backend processing of your orders. Entrepreneurs invest lots of money and time into making sure their site design is just right, but often gloss right over their order processing systems. Invest a fraction of the time you spend making design tweaks into choosing the right payment gateway, merchant processor, and bank account, and you will save yourself a lot of money!
Payment Gateways
Quite simply, a payment gateway is the system used to transmit your customer's payment information from your secure website to your secure merchant processor. Think of it as the terminal that collects, encrypts, and securely transmits the data to your merchant account. There are many different services to choose from when picking your payment gateway. It is important to know, however, that the gateway you choose must be compatible with your eCommerce solution. PLEASE be sure to get a list of the different gateways your eCommerce solution accepts, and contact each one to learn about their rates and service offerings.
According to a 2009 Internet Retailer report, the three most commonly used payment gateway providers among the top 500 eCommerce websites are:
- Chase Paymentech Solutions LLC. (113 of the Top 500)
- PayPal Inc. (75 of the Top 500)
- Cybersource Corp. (45 of the Top 500)
All-In-One (Payment Gateway and Merchant Processor)
PayPal (and other bundled solutions) offer an all-in-one service where you get the payment gateway and the merchant processor together. The advantage here is that you do not have to manage two separate accounts. Rates, however, are usually on the higher end of the spectrum.
For example, one of PayPal's services boasts a flat rate (for domestic sales) based upon your sales volume. The more you sell, the less they charge you to process the transaction. The benefit here is that regardless of which credit card is used (MasterCard, Visa, Discover, or the dreaded American Express), or whether the card is qualified, you get charged the same flat rate. This is unique to PayPal and other all-in-one services.
Merchant Processors
The payment gateway transmits the encrypted billing data to your merchant processor, who is then responsible for routing this data to the credit card network. The credit card network verifies that your customer's credit card is valid and has enough funds to cover the transaction, then notifies the payment gateway, which in turn communicates with your eCommerce solution. If the transaction is approved, the merchant processor will transmit your settled orders to your bank account (sometimes this requires a manual step).
The merchant processor is the behind-the-scenes system that communicates with the payment gateway, your customer's credit card network, and your bank account. This is a streamlined way to accept credit cards online. It's important to know whether your payment gateway, merchant processor, bank account, and eCommerce solution all work together. Please make sure your merchant processor interfaces with your payment gateway and your bank account!
What to Know
Payment Gateways - when choosing a payment gateway, verify and review the following:
- Gateway Setup Fee - many payment gateways require an initial payment to configure your gateway.
- Monthly Gateway Fee - this is an ongoing fee for the privilege of using the payment gateway.
- Per Transaction Fee - every transaction made is charged a fee. This also includes refunds, voids, and declines.
- Batch Fee - if you choose to settle up your transactions each day, then you will be charged this fee on a daily basis.
- API Integration - make sure your website's shopping cart can integrate with the gateway of your choice.
When reviewing this data, make sure that you understand all the fees and requirements. Also remember that you can negotiate pretty much all of these items (if you are processing a lot of orders). It's definitely worth a shot to call and try to get the best rate you can! For example, Authorize.net had a package for high-volume sites where they charged $50 a month, but provided 2,000 free transactions plus $0.07 per transaction thereafter. Added up over time, you can save thousands of dollars per year!
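The savings from a tiered plan like this are easy to estimate. Here is a minimal Python sketch; the $50 monthly fee, 2,000 free transactions, and $0.07 overage rate are taken from the example above, and real plans will vary:

```python
def monthly_gateway_cost(transactions, monthly_fee=50.00,
                         free_transactions=2000, per_txn_fee=0.07):
    """Estimate the monthly cost of a tiered high-volume gateway plan."""
    billable = max(0, transactions - free_transactions)
    return round(monthly_fee + billable * per_txn_fee, 2)

# 5,000 orders a month: $50 + 3,000 x $0.07 = $260
print(monthly_gateway_cost(5000))
```

Running the numbers for your own order volume makes it much easier to compare competing plans on a level footing.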
Merchant Processors - when choosing a merchant processor verify and review the following:
- Setup Fee - same as above
- Monthly Fee - same as above
- Per Transaction Fee - same as above
- Contract - review the contract length, renewal terms, and any early termination fees.
- Qualified Discount Rate - this is a very tricky fee to track. The qualified rate applies only to specific credit cards and credit card types.
- Non-Qualified Rate - understand which credit cards do not qualify for the discount rate so you can crunch the numbers. This fee can be as much as double your discount rate.
- Minimum Processing Fee - some merchant accounts will require a minimum monthly transaction threshold. If you don't meet this threshold, you are charged another fee.
- Order Refund/Chargeback Fee - when orders need to be refunded, or are charged back, you are usually going to be charged another fee.
- International Fee - check the rates for customer orders outside of the United States to see if you are charged extra.
Services like PayPal charge a flat percentage of the transaction (usually around 2.9%, depending on volume), plus a per transaction fee. Most merchant processors charge in the range of 2.2% - 2.65%.
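A quick way to compare a bundled flat rate against a merchant account rate is to compute the cost of a typical order under each. A sketch in Python; the $0.30 per-transaction fee here is an illustrative assumption, not a quoted rate:

```python
def processing_cost(order_total, percent_rate, per_txn_fee=0.30):
    """Cost to process one order: percentage cut plus a fixed per-transaction fee."""
    return round(order_total * percent_rate + per_txn_fee, 2)

# A $40 order: bundled service at 2.9% vs. a merchant account at 2.4%
print(processing_cost(40.00, 0.029))  # $1.46
print(processing_cost(40.00, 0.024))  # $1.26
```

Twenty cents per order sounds trivial, but multiplied across thousands of orders per month it adds up quickly, which is why higher-volume stores often outgrow bundled services.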
There is an ongoing debate about which is the best solution for eCommerce credit card processing.
What do you recommend when it comes to payment gateways and merchant accounts?
Copyright (c) 2011 Mike Hawk
For more information about Mike, or to find out how your business can benefit from accepting credit cards online or at a place of business, visit Merchant Perfect at http://www.merchantperfect.com
VoIP Gateways
What is a VoIP gateway?
A VoIP gateway is a piece of Voice over Internet Protocol equipment that utilizes IP (Internet Protocol) communications technology to interface TDM networks (PSTN), traditional telephones and PBX systems with Ethernet based networks.
Gateways allow businesses to:
- IP enable an existing PBX system
- Connect traditional telephones to their VoIP phone system
- Send/receive calls through the PSTN using a VoIP phone system
There are additional uses for gateways, but these are the primary ones.
How does a VoIP gateway work?
A VoIP gateway's basic function is to convert analog voice streams into digital voice packets for transport across a network and the Internet. Voice over IP gateways also perform the opposite of this function.
This means that gateways can also convert digital voice packets into analog voice streams for transport across the PSTN or for traditional telephones.
To do this, gateways use a combination of specialized voice codecs and VoIP protocols. The specific IP communications technology that is utilized is dependent upon the VoIP gateway used and other aspects of the complete set-up.
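To make the conversion concrete, here is a simplified Python sketch of the sampling and packetization stages. The 8 kHz sample rate and 20 ms frame size are typical of narrowband telephony (G.711-style), but these are illustrative assumptions; real gateways do this in dedicated hardware with actual voice codecs:

```python
import math

SAMPLE_RATE = 8000   # Hz, typical narrowband telephony sampling rate
FRAME_MS = 20        # one voice packet commonly carries 20 ms of audio

def sample_analog(duration_s, freq_hz=440):
    """Stand-in for the gateway's A/D stage: sample a sine tone."""
    n = int(SAMPLE_RATE * duration_s)
    return [math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE) for t in range(n)]

def packetize(samples):
    """Split the sample stream into frame-sized chunks, one per packet."""
    frame_len = SAMPLE_RATE * FRAME_MS // 1000   # 160 samples per 20 ms frame
    return [samples[i:i + frame_len] for i in range(0, len(samples), frame_len)]

packets = packetize(sample_analog(1.0))
print(len(packets))   # 50 packets for one second of audio
```

The reverse path works the same way in mirror image: the gateway reassembles incoming packets into a continuous sample stream and converts it back to an analog signal for the PSTN or a traditional telephone.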
Why would you use a VoIP gateway?
There are three main reasons that you would want to use a Voice over Internet Protocol gateway:
- You want to utilize VoIP service with your existing PBX system
- You want to use your existing telephones with a new VoIP phone system
- You want to have the PSTN as a failover should your network or VoIP service go down
In addition to these three main reasons, many businesses use a gateway to:
- Connect remote office telephones to their central office VoIP system
- Connect a branch office key system to a centralized VoIP system
For other gateway uses, consult a VoIP gateway manufacturer or certified supplier.
How much do gateways cost and where can you purchase one?
The prices for VoIP gateways vary greatly. Lower-end configurations can be had for around $199, while higher-end, more robust configurations can cost over $3,000 USD. Like all VoIP equipment, you get what you pay for, so make sure to set a reasonable budget for a reputable brand like AudioCodes or Quintum.
Finding a place to purchase a VoIP gateway is pretty easy.
The two most common ways to purchase an IP gateway are through a VoIP service provider (typically a SIP trunking provider) or through a VoIP equipment supplier (like VoIP Supply). If you opt to purchase your VoIP gateway through a VoIP equipment supplier, make sure it works with your desired VoIP service provider before pulling the trigger.
Garrett Smith is a VoIP industry expert, thought leader and current Director of Marketing and Business Development at VoIP Supply, the leading supplier of VoIP Systems and VoIP Gateways in North America since 2002.
Discount Gateway Laptops - How to Find Cheap Gateway Laptops
Finding cheap Gateway laptops (also called discount laptops) today is not as hard as you think. You just have to know where to look for them and what to look for. Gateway will not tell customers about this secret, as they want you to buy unnecessarily expensive laptops. A discount Gateway laptop will save you thousands of dollars, keeping some extra cash in your pocket. Discount Gateway laptops have the same updated gadgets and software that brand new laptops come with. You will not be able to tell the difference between your discount Gateway laptop and laptops that cost thousands of dollars. This article provides information on finding high quality cheap Gateway laptops, and a link to where you can buy them.
The key is to buy refurbished Gateway laptops
If you don't know what a refurbished Gateway laptop is, I'll get to the point and tell you exactly what it is and why you'll come out a winner buying one. Refurbished Gateway laptops are laptops that have been repaired, cleaned, and restored to their original state. Most refurbished Gateway laptops are actually nearly new laptops that needed a repair; once restored, they are put back up for sale for a much lower cost than the original price! Now you might wonder, "OK, if everything is fixed, why can't they sell it at the original price?" The answer, simply, is that companies cannot resell a repaired unit as new at the original price; doing so would be deceptive. So when you buy a refurbished Gateway laptop, you are practically buying a brand new laptop for a much lower price!
How are refurbished Gateway laptops reconditioned?
Every laptop being refurbished must go through and pass a mandatory, thorough examination. The laptop is repaired, rid of any defects, and fully restored. Gateway or a professional refurbishing company will repackage the laptop. The laptop is now back to a "brand new" state, like any new laptop! In order for companies to sell the laptop, it must be sold at a lower price. Your cheap Gateway laptop will typically carry the same full warranty as a new laptop. The refurbished Gateway laptop can do anything those over-priced, expensive laptops can do.
Now you know what you should look for when searching for discount Gateway laptops. If you are wondering where to begin, a good place to start is the link at the end of this article, which leads to a site for finding discount Gateway laptops. You can also search the Internet for cheap Gateway laptops, even though you may have to search through the clutter of expensive laptops that companies may try to shove in your face. If you have a nearby computer store, it may also sell discount Gateway laptops.
Find great deals for discount / cheap Gateway laptops
Five Important Considerations You Need To Make When Selecting An SMS Gateway
Our discussion starts at the point where you are venturing in search of an SMS gateway for the sending of SMS (Short Message Service texts). This is something you could be doing either on your own initiative (the objective being to use the gateway in sending your own texts), or as part of a job assignment. Now, as you venture out in search of a SMS gateway, you will discover that there is actually a wide variety of gateways you can choose from. Yet, as a reasonable person, if you select one gateway over the others, you need reasons for opting to do so; which is why you need some criteria, through which you can consider and ultimately make a good choice of a gateway. We now venture to look at five considerations, which would make good criteria for the selection of a gateway. These are actually factors you have to take into consideration, when selecting an SMS gateway - to avoid making a choice you will end up regretting:
1. Reliability: you come to realize that when you send a text message through a given SMS-gateway, you effectively entrust the said gateway with the delivery of the SMS. You also come to realize that there are actually some gateways that are so unreliable that sending texts through them is actually an act of faith (as the messages may end up being delivered or undelivered). Those are certainly not the types of gateways you need, hence the need for you to assess the various SMS-gateways you consider making use of carefully, with respect to their reliability. You may also consider checking their reviews in this respect, though it is also worth noting that there is no gateway which is a hundred percent reliable, meaning that each is bound to have some negative reviews. But those which seem to only have bitter negative reviews, and absolutely no positive reviews, may be worth avoiding.
2. Speed: there are other gateways which do, indeed, deliver texts sent through them - but which take ages before doing so. Such a gateway would be undesirable, especially keeping in mind that by the very nature of their contents, some text messages need to be delivered promptly.
3. User-friendliness: there are some SMS-gateways which are so complex that it takes an Einstein to operate them properly. You certainly shouldn't select one such gateway, especially if this gateway selection is something you are doing in your official capacity, and where some of the people who may be tasked with the sending of texts through it may be 'laymen users.'
4. Cost: there are some gateways you can use for free, and then there are others that charge very substantial sums of money for their services. If you have to pay to use an SMS-gateway, ensure that you get good value for your money.
5. Security: some texts (actually most texts) are confidential in nature. You want to make use of a gateway which ensures that the text moves from the sender directly to the recipient, without opportunity for interception. You also need to avoid making use of a gateway which comes with the risk of texts ending up being delivered to the wrong people! All this is stuff you can find out, by reading the reviews of a gateway, before starting to make use of it.
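One practical way to apply these five criteria is a simple weighted scorecard. A sketch in Python; the weights and ratings below are purely illustrative assumptions, and you should adjust them to your own priorities:

```python
# Illustrative weights for the five criteria above; they must sum to 1.0
WEIGHTS = {"reliability": 0.30, "speed": 0.20, "usability": 0.15,
           "cost": 0.15, "security": 0.20}

def score_gateway(ratings):
    """Combine per-criterion ratings (0-10) into one weighted score."""
    return round(sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS), 2)

# Hypothetical ratings for one candidate gateway
gateway_a = {"reliability": 9, "speed": 7, "usability": 8, "cost": 5, "security": 9}
print(score_gateway(gateway_a))  # 7.85
```

Scoring each candidate gateway the same way turns a vague gut feeling into a comparison you can defend, especially when the selection is part of a job assignment.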
The SMS gateway is a very important component required for services like the email to SMS gateway, as per the author.
Types of VoIP Gateways
VoIP gateways are a type of VoIP equipment that uses VoIP technology to convert analog voice streams to digital voice packets. Voice over IP gateways interface TDM networks (PSTN), traditional telephones and PBX systems with Ethernet based networks.
This allows companies to IP enable a legacy PBX system, connect existing telephones to a new VoIP system, and/or make calls through the PSTN using a VoIP phone system.
To accomplish these functions, a variety of different VoIP gateways exist.
Analog Voice over Internet Protocol Gateways
Analog VoIP gateways come in two different configurations, FXS and FXO.
- FXS Gateways - FXS (Foreign eXchange Station) gateways are primarily used to connect traditional telephones to a VoIP system. With an FXS gateway the traditional telephones plug into the FXS ports, while the gateway's Ethernet port connects it to the network (which the VoIP system is also connected to).
- FXO Gateways - FXO (Foreign eXchange Office) gateways are primarily used to connect a VoIP system to POTS (Plain Old Telephone Service) lines. With an FXO gateway the POTS lines plug into the FXO ports, while the gateway's Ethernet port connects it to the network (which the VoIP system is also connected to).
Analog IP gateways can be found in a number of configurations, from 4 ports to 48 ports. There are even combo analog VoIP gateways that feature both FXS and FXO ports.
Digital VoIP Gateways
Digital Voice over IP gateways come in four different configurations, T1, E1, J1 and BRI.
- T1 Gateways - T1 Gateways are used primarily in North America.
- E1 Gateways - E1 Gateways are used in Europe and most other parts of the world (except North America and Japan).
- J1 Gateways - J1 gateways are used in Japan.
- BRI Gateways - BRI gateways are used with ISDN service.
Digital IP gateways can be found in a number of channel configurations, from a single channel to 32 channels. Digital IP gateways usually ship in combo T1/E1 or T1/E1/J1 configurations.
If you're looking for more information, feel free to browse the other articles on this site about the topic or feel free to contact me directly.
Sunday, May 13, 2012
Guide to Buying Hard Drives
Apart from being one of the most essential parts of your computer, hard drive storage is constantly updating, in terms of both capacity of disk space and physical size. When it comes time to upgrade your disk storage, there are a number of factors to take into account. Once you've made basic decisions about size, connectivity, speed and data transfer rate, and whether you want an internal or external drive, you can search through Myshopping.com.au to find the most suitable brand and model, and compare the prices of different vendors.
How A Hard Drive Works
Your hard drive has a number of magnetized platters connected to a spindle. The spindle spins the platters at a very fast speed while a series of read/write heads scan over them both looking for and writing information. This information is transferred via a cable system, or through a wireless connection to a hard disk controller, which in most systems is built into the motherboard, or in some systems installed as an add-in card. The information that comes from your hard drive through its controller is then made available to the components of your computer. The effectiveness of your hard drive (its performance) depends on how much of its capacity remains unused, how well organised the data is (known as fragmentation) and its data transfer rate, which in turn is dependent on its connection type and the drive's spin rate.
Internal Hard Drives
Most computers, from the most basic home models up to the most powerful servers, have an internally installed hard drive. Technology today ensures that they are all generally fast and reliable, and offer dependable storage. Most modern computers have installation slots and cabling to enable you to install an additional hard drive. This allows you to increase your storage capacity without giving up your existing hard drive.
External Hard Drives
These drives are essentially the same drives as ones installed inside computers, but cased inside a protective, portable case. This is a good solution for people who work remotely and need to transport large amounts of data. If an external hard drive is your choice, make sure your computer is compatible with the interface that the hard drive uses. An add-in card, such as a FireWire card can help to increase your computer's capabilities. You can compare different brands of external hard drives simply at Myshopping.com.au and search on the connection type, or other specifications.
Laptop Hard Drives
There have been many advances in miniaturization of hardware components for laptop computing, and hard drive technology is not left out of this loop. Laptop hard drives function in exactly the same way as internal hard drives on other computers, only they are designed to provide maximum storage and efficiency in the smallest possible package. For added flexibility, some laptop computers come with removable hard drives that can be easily installed and removed. However, before you buy a hard drive for your portable computer, check that the hard drive's specifications will meet the standards of your computer, as many laptop hard drives are proprietary, and are not compatible with other brands and models.
Size
Your hard drive stores your operating system, its programs (games and applications), your working data, and your digital music and movies. Most new computer purchases have a minimum of 80 GB of hard disk space; many have considerably more. Hard drive space is one of those things: once you have it, you'll find ways to fill it soon enough. There is no real rule of thumb, but consider the cost per gigabyte of storage as a way to guide your purchase. If you work with large files, such as music, video and graphics, it pays to have a big storage space for your work. It may pay you to have two hard drives, one that houses all your programs and applications, and another for storing your work and projects.
You may want to compare the price of, say, a 160 GB drive against two separate 80 GB drives; if one drive fails, all is not lost. Today's hard drives, however, are fairly robust pieces of equipment and, providing they are not abused, will serve you well for a long time.
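Cost per gigabyte is simple to compute and makes the one-drive-versus-two comparison concrete. A quick Python sketch; the prices here are hypothetical, not quotes:

```python
def cost_per_gb(price, capacity_gb):
    """Price per gigabyte of storage, rounded to the cent."""
    return round(price / capacity_gb, 2)

# Hypothetical prices: one 160 GB drive vs. two 80 GB drives at $60 each
print(cost_per_gb(95.00, 160))      # single drive
print(cost_per_gb(2 * 60.00, 160))  # two-drive setup, same total capacity
```

In this hypothetical case the two-drive setup costs more per gigabyte, so you are effectively paying a premium for the redundancy of a second drive.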
Interface
One key distinguishing factor between hard drives is the way in which they connect to your computer. There are a number of basic types of connection schemes used with hard drives. Each connection type has a range of differences in performance.
IDE (INTEGRATED DRIVE ELECTRONICS)
This is by far the most common connection method. Because the hard drive controller is on the drive itself rather than on the motherboard, it helps to keep costs down. There are different IDE standards available. Mostly, you will want to purchase the fastest standard that your computer can support. Faster drives are generally backward compatible with older standards, so you can buy a faster drive now and update your computer at a later time. The different IDE standards, in order from most basic to fastest, are:
ATA (Basic). Supports up to two hard drives and features a 16-bit interface, handling transfer speeds up to 8.3 MB per second.
ATA-2 or EIDE (Enhanced IDE). Supports transfer speeds up to 13.3 MB per second.
ATA-3. A minor upgrade to ATA-2 and offers transfer speeds up to 16.6 MB per second.
Ultra-ATA (Ultra-DMA, ATA-33 or DMA-33). Dramatic speed improvements, with transfer rates up to 33 MB per second.
ATA-66. A version of ATA that doubles transfer rates up to 66 MB per second.
ATA-100. An upgrade to the ATA standard supporting transfer rates up to 100 MB per second.
ATA-133. Found mostly in AMD-based systems (not supported by Intel), with transfer rates up to 133 MB per second.
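The standards above can be captured in a small lookup table, which is handy for picking the fastest standard a given motherboard supports. A sketch in Python using the rates listed above:

```python
# Maximum transfer rates (MB/s) for the IDE standards listed above
ATA_RATES = {"ATA": 8.3, "ATA-2": 13.3, "ATA-3": 16.6,
             "Ultra-ATA": 33, "ATA-66": 66, "ATA-100": 100, "ATA-133": 133}

def fastest_supported(standards):
    """From the standards a motherboard supports, pick the fastest."""
    return max(standards, key=ATA_RATES.get)

print(fastest_supported(["ATA-2", "Ultra-ATA", "ATA-66"]))  # ATA-66
```

A drive rated for a faster standard than the chosen one will still work, since it falls back to the fastest rate the controller supports.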
SCSI (SMALL COMPUTER SYSTEM INTERFACE)
This is the hard drive interface standard used by many high-end PCs, networks and servers, and Apple Macintosh computers, except for the earliest Macs and the newer iMacs. While some systems support SCSI controllers on their motherboards, most feature a SCSI controller add-in card. SCSI drives are usually faster and more reliable, and the SCSI interface supports the connection of many more drives than IDE. While SCSI drives come in many different standards, many of them are not compatible with one another. So it's important to know that your computer supports the drive you plan to install. The different SCSI connections are:
SCSI-1. A basic connection using a 25-pin connector, supporting transfer rates up to 4 MB per second.
SCSI-2. Uses a 50-pin connector and supports multiple devices with a transfer rate of 4MB per second.
Wide SCSI. These drives have a wider cable and a 68-pin connection that supports 16-bit data transfers.
Fast SCSI. Uses an 8-bit bus but transfers data at 10 MB Per second.
Fast Wide SCSI. Doubles both the bus (16-bit) and the data transfer rate (20 MB per second).
Ultra SCSI or Ultra Wide SCSI. Uses an 8-bit bus and transfers data at 20 MB per second.
SCSI-3. Features a 16-bit bus and transfers data at 40 MB per second.
Ultra2 SCSI. Uses an 8-bit bus and transfers data at a rate of 40 MB per second.
Wide Ultra2 SCSI. Uses a 16-bit bus and supports data transfer rates of 80 MB per second.
FIREWIRE (IEEE 1394)
The FireWire standard is becoming popular in portable hard drives because it can be connected and removed without having to reboot the computer. It supports data transfer rates of 50 MB per second, which means it is ideal for video, audio and multimedia applications. FireWire requires a dedicated add-in card and the hard drives in use require an external power source, but the interface can support up to 63 devices simultaneously.
USB 1.1 (UNIVERSAL SERIAL BUS)
Pretty much all computers today include USB ports on their motherboards. (On older models, you can install an add-in card.) USB controllers can be used to connect external hard drives, and can support as many as 127 devices simultaneously, either through USB port hubs or linked in daisy-chain fashion. USB controllers do deliver power to devices connected to them, but many hard drives still use an external power source. USB 1.1 is limited by its data transfer speed, the maximum rate being about 1.5 MB per second.
USB 2.0 (HI-SPEED USB)
A more recently introduced and far better connection standard that offers backward compatibility and data transfer rates of up to 60 MB per second. A USB 1.1 system can use a USB 2.0 device, but it will need a USB 2.0 controller card to achieve the higher transfer rates.
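The practical difference between the two USB speeds is easy to see by estimating transfer times. A sketch in Python using the ideal rates quoted above, ignoring protocol overhead:

```python
def transfer_seconds(file_mb, rate_mb_per_s):
    """Ideal transfer time in seconds, ignoring protocol overhead."""
    return file_mb / rate_mb_per_s

# A 700 MB file over USB 1.1 (~1.5 MB/s) vs. USB 2.0 (~60 MB/s)
print(round(transfer_seconds(700, 1.5)))  # roughly 467 seconds
print(round(transfer_seconds(700, 60)))   # roughly 12 seconds
```

Nearly eight minutes versus a dozen seconds for the same file is why USB 2.0 matters so much for external hard drives.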
FIBRE CHANNEL
Fibre Channel is mainly used for high-bandwidth network servers and workstations, providing very fast data transfer rates (up to 106 MB per second) and connections over long cable distances, although it is expensive and you need to install a special interface card.
Spin rate
Data transfer rate is crucial to how well your computer performs for you. Apart from the connection types above, the performance of your hard drive depends on its spin rate, measured in RPM. Higher RPM generally means faster data transfer rate. The lowest spin speed that is acceptable in computing today is 5400 RPM. The common standard at present is 7200 RPM. But higher speeds are available in SCSI drives, and it is one area of computer system technology that is constantly being developed.
A larger capacity hard drive will not necessarily make your system function any faster unless you are low on available disk space with your existing drive. But a drive with Ultra ATA/100 or ATA/133 and a 7200 RPM spin rate will pretty much guarantee an improved hard drive performance.
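Spin rate translates directly into average rotational latency: on average the head waits half a revolution for the right sector to come around. A quick Python sketch comparing the two common speeds mentioned above:

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency: time for half a revolution, in milliseconds."""
    seconds_per_revolution = 60 / rpm
    return round(seconds_per_revolution / 2 * 1000, 2)

print(avg_rotational_latency_ms(5400))  # 5.56 ms
print(avg_rotational_latency_ms(7200))  # 4.17 ms
```

A millisecond and a half per access sounds tiny, but over the thousands of small reads a busy system performs it is a noticeable part of why 7200 RPM drives feel snappier.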
Other considerations
CACHE
Cache (pronounced 'cash') is additional temporary memory that acts as a buffer between the system and the drive. Frequently accessed data is stored in the cache for quick access. Cache sizes vary from 512 KB up to 16 MB on some SCSI drives. The larger the cache on your drive, the faster your drive will transfer data. If you are working with large files, such as video, images and audio files, it pays to have the largest cache you can get (8 MB or more).
SEEK TIME
The data on your disk is stored in tracks and sectors and when you instruct your hard drive controller to retrieve some data, it goes looking. The seek time is a measure of how long it takes the hard drive to find a specific track on a disk. Seek times can vary slightly from disk to disk and a drive with a faster seek time will always perform better.
INTERNAL AND EXTERNAL TRANSFER RATES
These two rates tell you how fast a drive actually reads data and passes it along to the system. Internal Transfer Rate refers to the time it takes for a drive's heads to read data from the platter and pass it to the drive's cache. The External Transfer Rate (sometimes called the Transfer Rate or the Burst Transfer Rate) is a measure of the time it takes to send the data from the cache all the way to the computer's memory. Naturally, faster transfer rates provide better performance.
S.M.A.R.T. (Self-Monitoring Analysis and Reporting Technology)
This is a nice built-in feature in some hard drives that can help alert you to a potential hardware problem. Your computer's BIOS must support this in order for the SMART function to work; however, the drive itself will still work in a system without it.
Buying and installing a hard drive has some technical aspects that you need to take into account. Use Myshopping.com.au to compare different hard drive makes and specifications to find the drive that will work best for your needs and computer. You can compare prices and service offers from different vendors.
Andrew Gates writes for the comparison online shopping service MyShopping.com.au. MyShopping.com.au helps you compare different hard drives from different brands in terms of specifications and accessories. You can also compare prices from hundreds of different brands and vendors.
Saturday, May 5, 2012
High Technology and Human Development
Some basic premises - often fashioned by leaders and supported by the led - exercise the collective conscience of the led in so far as they stimulate a willed development. The development is usually superior but not necessarily civilized. The premises in question are of this form: "Our level of technological advancement is second to none. Upon reaching this level, we also have to prepare our society for peace, and to guarantee the peace, technology must be revised to foster the policy of war." Technological advancement that is pushed in this direction sets a dangerous precedent for other societies that fear a threat to their respective sovereignties. They are pushed to also foster a war technology.
In the domain of civilization, this mode of development is not praiseworthy, nor is it morally justifiable. Since it is not morally justifiable, it is socially irresponsible. An inspection of the premises will reveal that it is the last one that poses a problem. The last premise is the conclusion of two preceding premises but is not in any way logically deduced. What it shows is a passionately deduced conclusion, and being so, it fails to be reckoned as a conclusion from a rationally prepared mind, at least at the time at which it was deduced.
A society that advances according to the above presuppositions - and especially according to the illogical conclusion - has transmitted the psyche of non-negotiable superiority to its people. All along, the power of passion dictates the pace of human conduct. Whether in constructive engagements or willed partnerships, the principle of equality fails to work precisely because of the superiority syndrome that grips the leader and the led. And a different society that refuses to share in the collective sensibilities or passion of such society has, by the expected logic, become a potential or actual enemy and faces confrontation on all possible fronts.
Most of what we learn about the present world, of course, via the media, is dominated by state-of-the-art technology. Societies that have the most of such technology are also, time and again, claimed to be the most advanced. It is not only their advancement that lifts them to the pinnacle of power, superiority, and fame. They can also use technology to simplify and move forward an understanding of life and nature in a different direction, a direction that tends to eliminate, as much as possible, a prior connection between life and nature that was, in many respects, mystical and unsafe. This last point does not necessarily mean that technological advancement is a mark of a superior civilization.
What we need to know is that civilization and technology are not conjugal terms. Civilized people may have an advanced technology or they may not have it. Civilization is not just a matter of science and technology or technical infrastructure, or, again, the marvel of buildings; it also has to do with the moral and mental reflexes of people as well as their level of social connectedness within their own society and beyond. It is from the general behaviour makeup of people that all forms of physical structures could be created, so too the question of science and technology. Thus, the kind of bridges, roads, buildings, heavy machinery, among others, that we can see in a society could tell, in a general way, the behavioural pattern of the people. Behavioural pattern could also tell a lot about the extent to which the natural environment has been utilized for infrastructural activities, science and technology. Above all, behavioural pattern could tell a lot about the perceptions and understanding of the people about other people.
I do believe - and, I think, most people do believe - that upon accelerating the rate of infrastructural activities and technology, the environment has to recede in its naturalness. Once advancing technology (and its attendant structures or ideas) competes with the green environment for space, this environment that houses trees, grass, flowers, all kinds of animals and fish has to shrink in size. Yet the growth of population, the relentless human craving for quality life, the need to control life without depending on the unpredictable condition of the natural environment prompt the use of technology. Technology need not pose unwarranted danger to the natural environment. It is the misuse of technology that is in question. While a society may justly utilize technology to improve quality of life, its people also have to ask: "how much technology do we need to safeguard the natural environment?" Suppose society Y blends the moderate use of technology with the natural environment in order to offset the reckless destruction of the latter, then this kind of positioning prompts the point that society Y is a lover of the principle of balance. From this principle, one can boldly conclude that society Y favours stability more than chaos, and has, therefore, the sense of moral and social responsibility. Any state-of-the-art technology points to the sophistication of the human mind, and it indicates that the natural environment has been cavalierly tamed.
If humans do not want to live at the mercy of the natural environment - which, of course, is an uncertain way of life - but according to their own predicted pace, then the use of technology is a matter of course. It would seem that the principle of balance that society Y has chosen could only be for a short while or that this is more of a make-believe position than a real one. For when the power of the human mind gratifies itself following a momentous achievement in technology, retreat, or, at best, a slow-down is quite unusual. It is as if the human mind is telling itself: "technological advancement has to accelerate without any obstruction. A retreat or a gradual process is an insult to the inquiring mind." This kind of thought process only points out the enigma of the mind, its dark side, not its finest area. And in seeking to interrogate the present mode of a certain technology according to the instructions of the mind, the role of ethics is indispensable.
Is it morally right to use this kind of technology for this kind of product? And is it morally right to use this kind of product? Both questions hint that the product or products in question may be harmful or harmless, environmentally friendly or not, and that they may cause harm not only directly to humans but directly to the environment too. And if, as I have stated, the purpose of technology is to improve the quality of life, then to use technology to produce products that harm both humans and the natural environment contradicts the purpose of technology, and it also falsifies an assertion that humans are rational. Furthermore, it suggests that the sophisticated level that the human mind has reached is unable to grasp the essence or rationale of quality life. In this regard, a peaceful coexistence with the natural environment would have been deserted for the sake of an unrestrained, inquiring human mind. The human mind would, as it were, become corrupted with beliefs or ideas that are untenable in any number of ways.
The advocacy done by environmentalists relates to the question of environmental degradation and its negative consequences on humans. They insist that there is no justification for producing high-tech products that harm both humans and the natural environment. This contention sounds persuasive. High technology may demonstrate the height of human accomplishment, but it may not point to moral and social responsibility. And to this point, the question may be asked: "In what ways can humans close the chasm between unrestrained high technology and environmental degradation?"
Too often, modern humans tend to think that a sophisticated lifestyle is preferable to a simple one. The former is supported by the weight of high technology, the latter mostly is not. The former eases the burden of depending too much on the dictates of the natural environment, the latter does not. The latter tends to seek a symbiotic relationship with the natural environment, the former does not. Whether human comfort should come largely from an advanced technology or from the natural environment is not a matter that can be easily answered. If the natural environment is shrinking due to population growth and other unavoidable causes, then advanced technology is required to alleviate the resulting pressures on human comfort. It is the irresponsible proliferation of, say, war technology and high-tech products, among others, that is in need of criticism and has to stop.
Mr. Ainsah-Mensah has worked in various capacities mostly in Canada and now in China. He is an education and race relations consultant, projects coordinator, writer, and post-secondary instructor in business courses, life skills, and critical thinking. He is currently the principal of Handan-Lilac Education Group in China.
Best Technology for Reliable Plugged Chute Detection
Myth or Reality: The Facts about Radar, and the Right Choice for Level in Solids Applications
With so many level technologies on the market today, the choice of technology is more difficult and can be confusing. Process measurement and controls are an essential component of any industrial plant attempting to comply with the strict safety and environmental regulations set forth by state agencies. Not only is it important to know what is contained within any silo or vessel, it is also vital to know whether material is blocked in a silo or flow area. Whether that material is too high or too low in the containment is also critical, as either condition can create enormous safety hazards for plant personnel as well as clean-up costs and agency fines. Additionally, installing point detection devices in transfer chutes for blockage detection is an inexpensive way of preempting a nasty chute blockage. These transfer chutes are found throughout a mining site, and one plugged chute can stop production, incurring hundreds of thousands of dollars in downtime costs. With that stated, reliable continuous level measurement and redundant point level detection are an important part of any process plant, particularly at a time when improving energy efficiency and reducing operating and maintenance costs are important considerations. Plant safety and meeting stricter environmental regulations become a challenge in this tough, competitive marketplace.
Many level applications pose special problems for process level equipment and technologies. Whether the industrial site is a mine, power generation facility, or cement plant, these sites all require technologies that will withstand the tough environmental conditions as well as the harsh nature of the solids applications. These include heavy dust in the airspace, steep angles of repose, high temperatures, changing process conditions, corrosive media, abrasive solids materials, and more. In addition, so many different sizes and shapes of containment mean that many installations have to deal with obstructions like mechanical bracing for structural support.
Plant personnel such as reliability engineers, operations managers, facilities engineers, and maintenance staff are always looking for ways to increase throughput, reduce downtime, and improve process efficiency. With technology constantly on the cutting edge, companies are designing process instrumentation that offers many different techniques for providing reliable level and point level detection in tough applications. To be successful in this instrumentation market, a company must offer solutions that add value for customers, with user-friendly configuration, high accuracy, and reliability in mind. Upgrading a plant's level instrumentation from older measurement techniques to newer designs will lower maintenance costs, improve process efficiency, and provide higher accuracy devices, all of which bring many benefits. With safety being most industrial companies' number one goal, any basic level measurement must be reliable, robust, and accurate, and there must also be robust systems to guard against spillage from overfilling vessels.
Unfortunately, even with today's advancements in process instrumentation, there is no single technology that will provide reliable measurement results in every application. Nevertheless, microwave radar has been promoted over the last several years as the panacea for all liquid and solid level materials. Is this really the case? What has happened in this instrumentation market to the idea of providing the right engineered solution for the customer's application? Let's really look at the technologies out there for liquids and solids level measurement: through-air radar, guided wave radar, ultrasonic, and what Hawk refers to as acoustic wave. In any application, the mechanical installation constraints, the conditions within the containment, and the capabilities of the level device will all affect the choice of measuring device. In the level instrumentation spectrum there are many different technologies, but the major contenders are ultrasonic or acoustic wave, TDR (guided wave radar), and non-contact microwave radar. It is interesting to note that the growth of ultrasonic technology (sometimes promoted as acoustic wave technology) has flat-lined or hit a roadblock, while microwave radar has been growing at the "speed of light" and is regarded, or at least touted, as the be-all and end-all technology for measuring level in liquids and solids. Choosing the proper technology from one of these three can be a challenge, but if you're looking for high reliability, low maintenance, and repeatable performance, the guidelines below cover each technology.
So, when one looks at level applications, the split is either liquids or solids. With liquids, many technologies can be applied depending upon the conditions in the application (temperature, pressure, airspace conditions above the liquid surface, mounting, mechanical obstructions, and more). Liquids, though, are not nearly as difficult to solve with level technologies as solids materials, which can range from fine powders to chunked aggregate, to the worst case of wet, moist, fine powdery material that adheres to almost anything. When it comes to through-air radar, guided wave radar, and ultrasonic or acoustic technologies, the choice is relatively straightforward with a few exceptions. If the liquid is water based, with a virtually non-vaporous atmosphere and temperatures and pressures in the ambient/atmospheric range, then ultrasonic or acoustic is suitable. Where microwave radar is applied, the liquids are probably going to be of a chemical or hydrocarbon formulation, probably with excessive temperatures or pressures, and with heavy vapor conditions in the airspace. Guided wave radar can be applied in the aforementioned conditions as well, with the exception perhaps of a range too long for a rod or flexible cable antenna, or an agitator in the vessel.
But make no mistake about the fact that when dealing with solids materials in an industrial environment like a metal or coal mine, or fly ash in a load-out silo at a power generation facility, the conditions for measurement are usually much more difficult. They require a technology that can endure atmospheric conditions like heavy dust, undulating material surfaces, wet or moist conditions from process sprayers, and sometimes hot conditions with build-up problems on any equipment installed in the application. If the height of the material containment for level measurement is more than 30 to 40 feet, then it is more appropriate and practical to choose a non-contact level measurement technology like ultrasonic, acoustic, or microwave radar. TDR or guided wave radar can provide continuous level measurements up to 80 feet; however, in solids materials the tensile forces and loading on the cable become extreme and can cause breakage and shearing. It is just not practical to outfit a solids measurement application with a contacting design like guided wave radar when there is any build-up potential or when lengths exceed 30 feet (10 meters). Also, as material shifts from one point to another in the solids, the cable follows that line of movement. Cost becomes a factor for guided wave radar in long measurements as well: as cable lengths increase, so does pricing. For level measurement in solids beyond 30 to 40 feet, the wiser choice is a non-contact technology.
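The range and build-up rules of thumb above can be collected into a small decision sketch. This is purely illustrative: the function name, parameters, and thresholds are drawn from the guidelines in this article, not from any vendor's selection tool.

```python
# Hypothetical decision helper encoding this article's rules of thumb for
# choosing a continuous level technology for solids. The 30 ft contacting
# cutoff and the build-up/dust caveats come from the text above; everything
# else (names, structure) is illustrative.

def suggest_solids_level_technology(range_ft: float,
                                    buildup_potential: bool,
                                    heavy_dust: bool) -> str:
    """Return a rough technology suggestion for a solids level application."""
    if range_ft <= 30 and not buildup_potential:
        # Short, relatively clean ranges: a contacting design can be practical.
        return "guided wave radar (TDR)"
    if heavy_dust or buildup_potential:
        # Self-cleaning, low-frequency acoustic designs tolerate dust/build-up.
        return "acoustic wave (non-contact)"
    # Long, dry ranges: non-contact microwave radar is a candidate.
    return "microwave radar (non-contact)"

print(suggest_solids_level_technology(60, buildup_potential=True, heavy_dust=True))
```

The point of the sketch is simply that range, build-up potential, and dust load drive the decision before cost does.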
So let's get down to the facts about non-contact technologies, both new and older, in the marketplace today. The technology known as ultrasonic has been around for many years and, as the name implies, is sound-based technology operating in the kilohertz frequency band, nominally above the audible range. The designers of ultrasonic technology have made valiant attempts to solve difficult solids applications with frequencies as low as 8 to 12 kHz and various transducer designs in size and shape, but the overall measurement success has been inconsistent at best. Then along comes non-contact microwave technology with claims that it is the new "sexy" technology for long-range, dusty solids measurements. Great claims for something that performs well in dry materials, but introduce moisture into the solids along with heavy dust and water sprayers for dust abatement, and that's a formula for disaster. This technology is not the panacea for all level applications as many companies tout, and it definitely does not have carte blanche performance in industries like coal, metal mining, minerals, and other solids industries. With the less than desirable results on solids using ultrasonic, and through-air radar failing to capitalize in the mining industries, what technology is out there to solve these applications? The overlooked technology, a variation on the ultrasonic theme but designed to offer significant application benefits, is acoustic wave technology. The magic behind this technology is that it utilizes audible frequencies (5 to 30 kHz) in a transducer design that is harnessed as a balanced resonant mass. The combination of low frequency, high applied power, and variable adaptive gain control makes acoustic wave technology a real, and really underestimated, solids solution.
On the transducer, the low frequency with high applied pulsing power to the face creates a pressure wave that literally offers consistent and proven self-cleaning properties. Effectively, there are no materials that will adhere to this transducer face regardless of their moisture or sticky properties.
So in mining applications, where there are wet screens from sprayers, or ROM bins with dust abatement controls causing heavy build-up on anything in the area, acoustic wave technology can reliably provide level measurement under those conditions. Microwave radar cannot function under these moist solids conditions, as material build-up adhering to the emitter on the inside of the horn antenna would be disastrous. Worse yet is the adherence of moist, powdered ore fines on the face of a "dust" cover that is designed to keep material from entering the horn antenna but does not prevent adherence on the cover face itself. Many suppliers of non-contact radar designs today will recommend the use of antenna purging with either water or air within the plant site. This purging option sounds great in design, but in reality the air purge causes more problems than it's worth, because most instrument air supplies contain moisture, and this moist air increases the chances of dust build-up on the emitter within the horn. Additionally, instrument air is not inexpensive to supply on a regular basis.
The key to measuring solids in conditions where moist, wet powders, ores, or aggregate exist is a technology with self-cleaning properties. With acoustic wave technology, high power to the transducer at low frequency is one key design criterion; however, it takes a lot more than just that, and that's where an Australian company has led the solids measurement charge within the level industry. The long wavelength of the low frequency designs also makes them appropriate for the tough stuff. Built for high performance in the worst conditions, acoustic wave technology will amaze the doubting customer once they see it in action and watch how it "takes a beating, yet keeps on repeating" in the measurement.
So again, choosing between non-contact acoustic wave and microwave radar for solids materials can be challenging, but there are some simple rules to keep in mind when considering the choice for the application. Remember that solids materials come in many different sizes and shapes, and regardless of the particle size, the material will be very dusty in the airspace. The method of fill and removal from the containment will also increase the dust in the airspace which can cause further deterioration of the measurement technology's signal. During fill using a dense phase pneumatic conveying system, which essentially blows the material into the silo from the top, the airspace conditions are extremely clouded, and difficult for most level technologies to perform reliably. During these conditions, the transmitted signal must be strong in power, have the right wavelength, and have the ability to penetrate the dust in the airspace without being attenuated.
For these dusty airspace conditions, let's evaluate and compare the two non-contact technologies and see which one is the most applicable under the toughest conditions. With microwave radar, the frequency of the device and the antenna design are very important to how well it performs in dusty conditions. Non-contact microwave radar designs typically operate in the frequency band from 5.8 to 26 GHz, with some even higher, using either a pulse or FMCW technique. The pulse wave radar technique seems to be most often used these days, typically in a frequency band of 24+ GHz. The correct size and type of antenna is essential when choosing this technology for solids level measurements. The antenna should be a horn style, and the size should be as large as possible; most manufacturers offer 2 to 6 inch diameters, with some offering 10 inch parabolic dish versions. Applying a 2 or 3 inch horn antenna is not appropriate for solids applications, as there is not enough of a collection area for the returning microwave signal. Choosing a horn diameter of 4 inches or larger is best for penetrating the dust in the airspace, as well as providing a better collector for the returning signals. The technology works well on measurement ranges up to 125 to 150 feet, but beyond that the readings become somewhat unreliable, and build-up of dust usually becomes a major deterrent to the propagation of the microwave energy.
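To see why antenna diameter matters, a rough beamwidth estimate can be made with the common ~70·λ/D rule of thumb for aperture antennas. This is an approximation only; the constant 70 and the uniform-aperture assumption are simplifications, not a radar manufacturer's specification.

```python
# Rough -3 dB beamwidth estimate using the common ~70 * lambda / D rule of
# thumb for aperture antennas, to illustrate why larger horn diameters focus
# the microwave energy better. The rule itself is an approximation.

SPEED_OF_LIGHT = 3.0e8  # m/s

def beamwidth_deg(freq_hz: float, horn_diameter_in: float) -> float:
    """Approximate beamwidth in degrees for a given frequency and horn size."""
    wavelength = SPEED_OF_LIGHT / freq_hz     # metres
    diameter = horn_diameter_in * 0.0254      # inches -> metres
    return 70.0 * wavelength / diameter

for d in (2, 4, 6, 10):
    print(f'{d}" horn at 24 GHz: ~{beamwidth_deg(24e9, d):.1f} deg beamwidth')
```

A 2 inch horn at 24 GHz spreads its energy over roughly twice the angle of a 4 inch horn, which is consistent with the article's advice to use 4 inches or larger on solids.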
A Teflon-fabricated dust cover is applied to the end of the horn antenna to prevent dust from entering and building up inside the horn. However, dust then builds on the cover and over time will impede the signal regardless of its dielectric value and moisture content. Remember what was stated earlier in this article about suppliers recommending purging options like air or water: this is not a practical solution for removing adhered solids particles. Suffice it to say that a microwave design has no self-cleaning properties, and these antenna purges do not work properly and are not practical for most industrial applications. For long, dusty airspace measurements on solids, the larger parabolic horn antenna is recommended, but this horn size requires an opening of 10+ inches in diameter. Build-up, though, is also a realistic problem with this large antenna, as it has a large surface area and again no self-cleaning properties.
When we speak about ultrasonic (and acoustic wave) technology for level applications, we are talking about operating frequencies in the 5 to 40 kHz band and transducer sizes of 2 to 9 inches in diameter. For liquid level applications, frequencies of 30 to 40 kHz are suitable, as the airspace contains no dust particulate, so propagation of the acoustic wave is affected only by the vapor space. Keep in mind, too, that acoustic wave technology differs from ultrasonic technology in that the application of lower frequency designs with high pulse power creates a pressure wave effect that literally atomizes any condensation adhering to the bottom of the transducer face. No other ultrasonic design on the market today offers these cleaning properties. When speaking about solids level applications with heavy dust in the airspace, a low frequency at high power is absolutely essential. There are also other things to consider for proper propagation of the acoustic wave signal in dusty conditions. The dust particles in the airspace will most assuredly attenuate or absorb the acoustic wave if the transducer is not properly sized to the application. The distance of the measurement, the airspace conditions, and the mounting availability are all factors to consider when applying the right transducer. For ultrasonic technology in solids level applications, size does matter: the lower frequency transducers will make the long distance shots and penetrate the dust particulate with minimal attenuation. These 5 or 10 kHz acoustic wave transducers are audible and have a lot of power applied to them with a variable gain scheme. The key to performance in these difficult applications is the application of the lower frequencies.
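A quick back-of-envelope calculation shows how much longer the wavelengths of these low-frequency acoustic designs are than those of a standard 40 kHz ultrasonic transducer (longer wavelengths scatter less off fine airborne dust). The speed of sound used here (~343 m/s in air at 20 °C) is an assumed constant.

```python
# Wavelength comparison supporting the low-frequency argument above.
# Assumptions: speed of sound in air ~343 m/s at 20 C; speed of light 3e8 m/s.

SPEED_OF_SOUND = 343.0   # m/s, air at ~20 C
SPEED_OF_LIGHT = 3.0e8   # m/s

def wavelength_mm(speed_m_s: float, freq_hz: float) -> float:
    """Wavelength in millimetres for a wave of the given speed and frequency."""
    return speed_m_s / freq_hz * 1000.0

print(f"5 kHz acoustic wave:    {wavelength_mm(SPEED_OF_SOUND, 5e3):.1f} mm")
print(f"40 kHz ultrasonic:      {wavelength_mm(SPEED_OF_SOUND, 40e3):.1f} mm")
print(f"24 GHz microwave radar: {wavelength_mm(SPEED_OF_LIGHT, 24e9):.1f} mm")
```

The 5 kHz transducer works with a wavelength roughly eight times that of a 40 kHz unit, which is the physical reason the low-frequency designs penetrate dust-laden airspace with less attenuation.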
Oversizing the transducer based on frequency, and knowing the conditions in the measurement, will prove successful. The lower frequency with power will deal with the harsh conditions of dust, build-up, moisture in the airspace, and much more. For long range measurements beyond 50 feet and very dusty airspace conditions, the selection of the transducer frequency is important and should be, at minimum, 15 kHz or lower. Remember, though, it is not only the frequency that determines success in these applications, but also the power applied, the transducer design, and the dynamic gain circuit. With the right transducer selected, the next thing to consider is the build-up potential of the solids materials in the application. As discussed in the previous paragraph, microwave radar has no self-cleaning properties, so build-up can be a factor in impeding the energy from sensor to material surface. Acoustic wave technology applies high energy to a crystal set, causing mechanical vibration on the transducer surface and resulting in enough movement to keep solids dust particles off of the transducer face.
This self-cleaning technique allows proper propagation of the low frequency signal even under the dustiest airspace conditions, as no build-up will adhere to the transducer face. The reliable, continuous performance of the acoustic wave system also depends upon the adjustability of the gain circuit. As the acoustic signal decreases in amplitude, the dynamic gain circuit automatically increases the gain so that the amplitude increases and the level measurement can be maintained. This ability to vary the gain dynamically throughout the measurement proves to be a strong point when combined with the lower frequency, high power system. It takes every bit of this technological savvy to accomplish a reliable level measurement in solids applications.
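The dynamic gain behaviour described above can be sketched as a simple feedback loop: when the echo amplitude falls, the gain is stepped up toward a target level. The constants, step size, and function below are illustrative assumptions, not the actual circuit design of any product.

```python
# Minimal sketch of a dynamic (automatic) gain control loop: the receiver
# gain is nudged so that gain * echo amplitude tracks a target level.
# TARGET_AMPLITUDE, GAIN_STEP, and MAX_GAIN are illustrative values only.

TARGET_AMPLITUDE = 1.0
GAIN_STEP = 0.1
MAX_GAIN = 20.0

def update_gain(gain: float, echo_amplitude: float) -> float:
    """Step the gain toward holding the echo at TARGET_AMPLITUDE."""
    if echo_amplitude * gain < TARGET_AMPLITUDE:
        gain = min(gain + GAIN_STEP, MAX_GAIN)   # weak echo: raise gain
    elif echo_amplitude * gain > TARGET_AMPLITUDE:
        gain = max(gain - GAIN_STEP, 1.0)        # strong echo: back off
    return gain

# A weakening echo (e.g. rising dust in the airspace) drives the gain upward.
gain = 1.0
for echo in [1.0, 0.8, 0.6, 0.4, 0.3]:
    gain = update_gain(gain, echo)
print(round(gain, 1))
```

The real circuit varies gain continuously across the measurement cycle; the sketch only captures the direction of the adjustment.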
Level measurement in liquids applications is considered much easier with regard to a reliable acoustic signal than solids measurement on materials like coal, lime, mined ores, cement, and gypsum. The choice of the right technology for these difficult solids applications does not have to be a brain teaser. Most companies are astute at assisting in the applicability of their designs, but it is important for you as the user to understand the limitations of the technologies. Below is a summary chart for the technologies discussed in this article, along with others, and the various conditions under which there could be exposure. It serves as a guide for selecting a technology for your application conditions.
Now, for every continuous level application in your facility, you should be considering a reliable point level technology as well. The practice of pairing a continuous level measurement with a point level device of an alternate technology should be adopted by every company. And no, it's not because the suppliers want to sell more product, but because it simply makes logical sense. Think about it: if you have a malfunction or an application upset with your continuous device, and there is no point level shut-off for high level, then you will have a spill, and that spill requires clean-up, which results in unnecessary costs and potential fines by governmental agencies like the EPA. Additionally, these spills could result in a safety violation, with harm caused to employees or the process. In addition to the high level back-up, precautions should be taken with a point level switch for low level shut-off, as well as point detection in chutes carrying solids material. Using point level technologies for back-up protection provides a high degree of cost prevention against replacing damaged pump systems, screw conveyors, valves, and other process control devices. With point level switches costing anywhere from $200 to $2,000, depending on the severity of the application, they are relatively low cost and provide a low cost of ownership, as they serve to prevent problems.
Given the importance of having a point level back-up to your continuous level technology, it is wise to choose a technology different from your continuous device in the application. So, for instance, if you have an acoustic wave system measuring coal in your load-out silos, you could apply a point level technology of vibration, capacitance, rotating paddle, or microwave. There are many different point level technologies to choose from. The most commonly used for solids applications are capacitance, vibratory forks, rotating paddles, acoustic wave, and microwave designs. With solids materials, abrasion and heavy loading of the material can cause problems for a point level device, especially on low level or high flowing materials, so choosing the right one is important. Other factors like build-up on the probe elements or impact from falling material can also affect the performance and reliability of the product.
The technologies of microwave and acoustic wave lend themselves to the more difficult solids applications, although both are also seeing use in easy applications. These two technologies are more often seen on the difficult applications where an indication of material absence/presence is critical in the customer's process and reliable detection is therefore mandatory. The microwave detection technology places the faces of the transmit and receive sensors across from one another over a certain short or long distance, looking through a plastic window such as Teflon. There is no contact with the material in the silo and no protrusion, thus no wear and tear, and performance is reliable provided the material is dry. If the material has some moisture, or may be either wet or dry, then acoustic wave technology can be applied. The beauty of this technology is that it also does not protrude into the vessel, and it uses a very wear-resistant titanium face for long-lasting durability in abrasive applications. Microwave and acoustic wave designs cost more than conventional point level technologies like capacitance or rotating paddle wheels, but once installed, they rarely need replacement. They are set up with minimal configuration, and the user can then literally walk away with no problems after that point.
So, in summary, what I wanted to share with every reader is that there are many technologies for measuring continuous and point level within the solids industry, but making the right choice for long-term reliability, low maintenance, and high performance is where the rubber meets the road. If safety, improving process efficiency, or saving costs are your concern, then take this information to heart and contact your local level expert, or me, if you'd like some guidance. And finally, let me say that the success and performance reliability of any technology rest not upon its popularity, but on its capability to deal with adversity. Don't sell short the technologies that have been around for many years.
Jerry Boisvert (jerry.boisvert@hawkmeasure.com, 978-530-8588)
Hawk Measurement Systems
7 River Street
Middleton, Massachusetts 01949
What Are My Fuel & Technology Choices? (Part 1)
Over the past two years, the dramatic rise in natural gas and oil prices has had a crippling effect on the bottom line of any substantial energy user who depends on these fuels. Numerous industrial and non-regulated utility clients have contacted ESI to perform preliminary engineering and feasibility studies to investigate the costs and benefits of switching to a solid fuel. The reasons are obvious. Depending upon the location, companies are forecasting their delivered natural gas pricing going forward at $9.00 to $12.00 per mmbtu. On the other hand, coal and waste wood can be delivered at between $2.00 and $3.50 per mmbtu. If your low pressure saturated steam load is approximately 150,000 pph, you are going to use approximately 1.5 million mmbtu per year, which means your annual operating cost savings would range between $8 and $15 million.
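The savings figure quoted above can be checked directly: 1.5 million mmbtu per year times the spread between delivered gas and solid fuel prices. A minimal sketch of the arithmetic, using only the numbers given in the text:

```python
# Reproducing the article's savings estimate. The 1.5 million mmbtu/year
# figure for a ~150,000 pph saturated steam load is taken from the text;
# the function name is illustrative.

ANNUAL_HEAT_INPUT_MMBTU = 1.5e6

def annual_savings(gas_cost_per_mmbtu: float, solid_cost_per_mmbtu: float) -> float:
    """Annual dollar savings from fuel switching, given delivered $/mmbtu costs."""
    return ANNUAL_HEAT_INPUT_MMBTU * (gas_cost_per_mmbtu - solid_cost_per_mmbtu)

low = annual_savings(9.00, 3.50)    # least favorable price spread
high = annual_savings(12.00, 2.00)  # most favorable price spread
print(f"${low/1e6:.2f}M to ${high/1e6:.2f}M per year")
# → $8.25M to $15.00M per year, consistent with the $8-$15 million range
```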
The first question we get is generally the same: what fuels are economically available, and what type of boiler should I put in to burn them? One would think the answer would be simple and readily defined by answering the following questions:
- What is the delivered cost of the cheapest fuel?
- What technology is best able to burn that fuel?
Unfortunately, the analysis becomes much more complex because of several factors, each of which can have a dramatic effect on the results. These factors include, but are not limited to, the following:
- How much real estate is available for the new solid fuel-fired facility?
- How is the fuel delivered, by truck or rail?
- Are long-term contracts available for the preferred fuel at attractive rates?
- How many days of on-site fuel storage are required?
- Is the proposed site a non-attainment area for any existing criteria pollutant?
- What are the key factors that need to be considered in a new air permit?
- What are the trade-offs the company wants to consider regarding capital and operating cost?
- How will the new facility affect the current process and other plant operations?
- How will back-up capacity and redundancy be addressed and provided?
- What is the required corporate return on investment?
The answers to many of these questions are interrelated which makes for a dynamic analysis. However, there are some general principles and guidelines that can help anyone contemplating a capital project for a fuel change. In this and subsequent issues of the ENERGY SOURCE, ESI will provide some basic answers to the following questions:
- What are the commercially available technologies for firing solid fuels?
- What are the differences between these technologies?
- What technology is best for a specific fuel?
- What are the differences between firing wood, coal, and other solid fuels?
- What are the critical parameters in a fuel analysis that affect the technology selection and plant design?
What Are The Commercially Available Technologies for Firing Solid Fuels?
Currently there are four primary technologies for firing solid fuels in a boiler:
- Pulverized Coal
- Stoker Fired
- Bubbling Fluid Bed
- Circulating Fluid Bed
Gasification is emerging as a commercially available technology; however, current experience with it is in fairly small systems, typically under 100 mmbtu/hr heat input. For this reason, we will not include gasification in this discussion.
Pulverized Coal Technology - A pulverized coal-fired boiler typically fires sub-bituminous and bituminous coal. These boilers range in size from small 50,000 pph industrial units to large utility boilers up to 1,300 MW. With this technology, mine-run coal is received on-site, where it is typically crushed and then pulverized to pass a minimum of 70% through a 200 mesh screen and 98.5% through a 50 mesh screen. Once sized, the coal is introduced into the furnace through air-staged low-NOx burners. The technology responds very well to load swings. Units utilizing this technology have been in commercial operation since the late 1800s. PC boilers are relatively high NOx generators compared to BFB and CFB boilers, and the technology must use back-end air pollution control equipment to control SOx, NOx, particulate, HCl, and mercury emissions. The capital costs for PC units larger than 150,000 pph are relatively low when compared to the newer CFB and BFB units.
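The fineness specification above lends itself to a simple acceptance check. The sketch below is purely illustrative: the two thresholds come from the text, while the function name and the sample sieve results are invented for demonstration.

```python
# Hypothetical check of a pulverizer sieve analysis against the fineness
# spec described above: at least 70% passing 200 mesh and 98.5% passing 50 mesh.
def meets_fineness_spec(pct_passing_200_mesh, pct_passing_50_mesh,
                        min_200=70.0, min_50=98.5):
    """Return True if the pulverized coal sample meets both fineness limits."""
    return pct_passing_200_mesh >= min_200 and pct_passing_50_mesh >= min_50

print(meets_fineness_spec(72.4, 99.1))  # sample in spec
print(meets_fineness_spec(65.0, 99.0))  # too coarse at 200 mesh
```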
Stoker-Fired Technology - Stoker technology can be designed to fire a wide variety of solid fuels, including coal, wood, and other opportunity fuels, alone or in combination. Stoker-fired units vary in size from 15,000 pph to 500,000 pph. The technology is very versatile in its ability to be designed for various fuels; however, once a fuel is selected, the technology is much less forgiving than a CFB or BFB of drastic variations in fuel. The technology is not appropriate for biomass conversion or opportunity fuels with fuel moisture in excess of 55%.
With a stoker-fired unit, fuel is metered and distributed across a horizontal surface (grate) inside the boiler; small holes in the grate allow combustion air to pass through. The combustion air passes through the "bed" of burning material on the grate and interacts with the fuel. Additional combustion air is usually introduced above the bed to complete the combustion of any volatile gases. The grate can be either stationary or moveable. Stationary grates are less expensive but require periodic manual de-ashing. Moving grates come in various forms, including vibrating, chain, and traveling grates, all of which remove ash from the bed automatically. Stoker boilers can be designed to respond reasonably well to load swings when the fuel is properly sized and the boiler is properly tuned. Stoker firing is among the oldest of these technologies, having been in operation since the early 1800s. Stoker systems are relatively high NOx and CO generators compared to the other solid fuel technologies. The capital cost of this technology is among the lowest, especially in the smaller size ranges.
Bubbling Fluid Bed Technology - Bubbling fluid bed (BFB) technology is designed to fire relatively high-moisture (over 45%) fuels with relatively low heating values. Fuels ideal for this technology include various biomass fuels, most sludges, and some waste fuels. Boiler sizes range from 30,000 pph to 500,000 pph, and larger BFB-fired boilers are now being considered. With this technology, fuel is introduced to a lower portion of the boiler or combustor that has been designed to hold an inert bed of material. The bed is heated to a temperature between 1400°F and 1700°F. Combustion air is introduced to the bed via fluidizing air nozzles located in the bed, fluidizing the bed and fuel mixture into a low-density state much like boiling water. Additional combustion air is introduced above the bed to combust the volatile gases driven off in the bed combustion process.
Bubbling fluid bed technology is very adaptable to many fuel types as long as the moisture content does not get too low and the alkali content in the fuel does not get too high. Alkalis, including potassium and sodium, can cause bed agglomeration. Ash is removed by a bed drain system; the drained material is typically screened to recover the sand, which is re-injected into the bed. Fluidized bed boilers can respond well to load swings; however, due to the high thermal mass of the bed, they are not as responsive as stoker or pulverized coal-fired units. Fluid bed technology is among the newest commercially viable solid fuel combustion technologies, having been in existence since the 1970s. Besides fuel flexibility, one of the chief advantages of this technology is its relatively low emissions. Due to the low combustion temperatures of a fluidized bed, NOx generation is very low. Another advantage is that if the fuel has a high sulfur content, limestone can be introduced into the bed to control sulfur dioxide emissions. Fluid bed technology remains relatively capital-intensive when compared to stoker-fired technology.
Circulating Fluid Bed Technology - The latest generation of commercial combustion technology is the circulating fluid bed (CFB). This technology is the most versatile in terms of the fuels it can combust: fuels ranging widely in HHV and moisture content, from anthracite coal to biomass to petroleum coke, have all been successfully fired. Boiler sizes range from 100,000 pph industrial units to utility boilers over 600 MW. As with BFB technology, fuel is introduced into a lower portion of the boiler or combustor, which has been designed to hold a bed of inert material and fuel. Using conventional burners, the bed is heated to a temperature between 1400°F and 1700°F. Combustion air is introduced via fluidizing air nozzles located in the bed, which fluidize the bed material. However, with a CFB, the material is fluidized to the point that it actually travels up the furnace and is allowed to exit it.
Once the material leaves the furnace, it is captured by either large high-efficiency cyclones or multicyclones and recirculated back to the bottom of the furnace. Additional combustion air is introduced above the lower furnace to combust the volatile gases driven off by the combustion process. Ash is removed from the bed by a bed drain system. Circulating fluid bed boilers can respond well to load swings; however, due to the high thermal mass of the bed, they are less responsive than pulverized coal-fired units. CFB technology is the newest of the commercially viable combustion technologies, having been in existence only since the 1980s. Besides fuel flexibility, one of its chief advantages is relatively low emissions. Due to the low combustion temperatures of a CFB, NOx generation is very low. Another advantage is that if the fuel has a high sulfur content, limestone can be introduced into the bed to control sulfur dioxide emissions. CFB technology remains the highest capital cost option of the four technologies.
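The rules of thumb scattered through the four descriptions above can be collected into a rough screening sketch. This is only a toy encoding of the article's generalizations: the thresholds, the function, and the sample fuel are illustrative, and real technology selection requires a full fuel analysis and engineering study.

```python
# Toy screening of the four technologies against a candidate fuel, encoding
# only the rules of thumb stated in this article.
def candidate_technologies(fuel_type, moisture_pct, steam_load_pph, high_alkali=False):
    options = []
    # PC: sub-bituminous/bituminous coal, from ~50,000 pph up.
    if fuel_type in ("bituminous", "sub-bituminous") and steam_load_pph >= 50_000:
        options.append("Pulverized Coal")
    # Stoker: versatile, but not for fuels over ~55% moisture; 15,000-500,000 pph.
    if moisture_pct <= 55 and 15_000 <= steam_load_pph <= 500_000:
        options.append("Stoker")
    # BFB: high-moisture (over ~45%) fuels, alkali not too high; 30,000-500,000 pph.
    if moisture_pct >= 45 and not high_alkali and 30_000 <= steam_load_pph <= 500_000:
        options.append("Bubbling Fluid Bed")
    # CFB: most fuel-flexible, from ~100,000 pph up.
    if steam_load_pph >= 100_000:
        options.append("Circulating Fluid Bed")
    return options

# Example: wet waste wood at a 150,000 pph plant.
print(candidate_technologies("waste wood", 50, 150_000))
```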
ESI hopes this brief introduction into the various technologies available for steam and power generation has been informative. The next article in the series will outline specific differences in the various technologies including emissions limit capabilities, fuel limitations, and more. If you are currently evaluating your options for solid fuel firing, give the experts at ESI a call to discuss your options. Contact Jay Garrett at 770-427-6200 or info@esitenn.com.
ESI ( http://www.esitenn.com/ ) provides rental boilers, biomass conversion, and bulk material handling systems consulting and implementation for steam power generation applications.
How Can Instructional Technology Make Teaching and Learning More Effective in the Schools?
In the past few years, research on instructional technology has resulted in a clearer vision of how technology can affect teaching and learning. Today, almost every school in the United States uses technology as a part of teaching and learning, and each state has its own customized technology program. In most of those schools, teachers use technology through integrated activities that are a part of their daily school curriculum. For instance, instructional technology creates an active environment in which students not only inquire, but also define problems of interest to them. Such an activity would integrate the subjects of technology, social studies, math, science, and language arts with the opportunity to create student-centered activity. Most educational technology experts agree, however, that technology should be integrated, not as a separate subject or as a once-in-a-while project, but as a tool to promote and extend student learning on a daily basis.
Today, classroom teachers who lack personal experience with technology present an additional challenge. In order to incorporate technology-based activities and projects into their curriculum, those teachers first must find the time to learn to use the tools and understand the terminology necessary for participation in projects or activities. They must be able to employ technology both to improve student learning and to further their own professional development.
Instructional technology empowers students by improving skills and concepts through multiple representations and enhanced visualization. Its benefits include increased accuracy and speed in data collection and graphing, real-time visualization, the ability to collect and analyze large volumes of data, collaboration in data collection and interpretation, and more varied presentation of results. Technology also engages students in higher-order thinking, builds strong problem-solving skills, and develops deep understanding of concepts and procedures when used appropriately.
Technology should play a critical role in academic content standards and their successful implementation. Expectations reflecting the appropriate use of technology should be woven into the standards, benchmarks, and grade-level indicators. For example, the standards should include expectations for students to compute fluently using paper-and-pencil, technology-supported, and mental methods, and to use graphing calculators or computers to graph and analyze mathematical relationships. These expectations should support a curriculum rich in the use of technology rather than limit technology to specific skills or grade levels. Technology makes subjects accessible to all students, including those with special needs. Options for helping students maximize their strengths and progress in a standards-based curriculum are expanded through technology-based support and interventions. For example, specialized technologies enhance opportunities for students with physical challenges to develop and demonstrate mathematics concepts and skills. Technology influences how we work, how we play, and how we live our lives. The influence that classroom technology should have on math and science teachers' efforts to provide every student with "the opportunity and resources to develop the language skills they need to pursue life's goals and to participate fully as informed, productive members of society" cannot be overestimated.
Technology provides teachers with the instructional tools they need to operate more efficiently and to be more responsive to the individual needs of their students. Selecting appropriate technology tools gives teachers an opportunity to build students' conceptual knowledge and connect their learning to problems found in the real world. Tools such as Inspiration® software, Starry Night, WebQuests, and Portaportal allow students to employ a variety of strategies such as inquiry, problem-solving, creative thinking, visual imagery, critical thinking, and hands-on activity.
Benefits of the use of these technology tools include increased accuracy and speed in data collection and graphing, real-time visualization, interactive modeling of invisible science processes and structures, the ability to collect and analyze large volumes of data, collaboration for data collection and interpretation, and more varied presentations of results.
Technology integration strategies matter for content instruction. Beginning in kindergarten and extending through grade 12, various technologies can be made a part of everyday teaching and learning, where, for example, the use of meter sticks, hand lenses, temperature probes, and computers becomes a seamless part of what teachers and students are learning and doing. Content teachers should use technology in ways that enable students to conduct inquiries and engage in collaborative activities. In traditional or teacher-centered approaches, computer technology is used more for drill, practice, and mastery of basic skills.
The instructional strategies employed in such classrooms are teacher centered because of the way they supplement teacher-controlled activities and because the software used to provide the drill and practice is teacher selected and teacher assigned. The relevancy of technology in the lives of young learners and the capacity of technology to enhance teachers' efficiency are helping to raise students' achievement in new and exciting ways.
As students move through grade levels, they can engage in increasingly sophisticated hands-on, inquiry-based, personally relevant activities where they investigate, research, measure, compile and analyze information to reach conclusions, solve problems, make predictions and/or seek alternatives. They can explain how science often advances with the introduction of new technologies and how solving technological problems often results in new scientific knowledge. They should describe how new technologies often extend the current levels of scientific understanding and introduce new areas of research. They should explain why basic concepts and principles of science and technology should be a part of active debate about the economics, policies, politics and ethics of various science-related and technology-related challenges.
Students need grade-level appropriate classroom experiences, enabling them to learn and to be able to do science in an active, inquiry-based fashion where technological tools, resources, methods and processes are readily available and extensively used. As students integrate technology into learning about and doing science, emphasis should be placed on how to think through problems and projects, not just what to think.
Technological tools and resources may range from hand lenses and pendulums, to electronic balances and up-to-date online computers (with software), to methods and processes for planning and doing a project. Students can learn by observing, designing, communicating, calculating, researching, building, testing, assessing risks and benefits, and modifying structures, devices and processes - while applying their developing knowledge of science and technology.
Most students in the schools, at all age levels, might have some expertise in the use of technology; however, throughout K-12 they should come to recognize that science and technology are interconnected and that using technology involves assessment of the benefits, risks, and costs. Students should build scientific and technological knowledge, as well as the skills required to design and construct devices. In addition, they should develop the processes to solve problems and understand that problems may be solved in several ways.
Rapid developments in the design and uses of technology, particularly in electronic tools, will change how students learn. For example, graphing calculators and computer-based tools provide powerful mechanisms for communicating, applying, and learning mathematics in the workplace, in everyday tasks, and in school mathematics. Technology, such as calculators and computers, helps students learn mathematics and supports effective mathematics teaching. Rather than replacing the learning of basic concepts and skills, technology can connect skills and procedures to deeper mathematical understanding. For example, geometry software allows experimentation with families of geometric objects, and graphing utilities facilitate learning about the characteristics of classes of functions.
Learning and applying mathematics requires students to become adept in using a variety of techniques and tools for computing, measuring, analyzing data and solving problems. Computers, calculators, physical models, and measuring devices are examples of the wide variety of technologies, or tools, used to teach, learn, and do mathematics. These tools complement, rather than replace, more traditional ways of doing mathematics, such as using symbols and hand-drawn diagrams.
Technology, used appropriately, helps students learn mathematics. Electronic tools, such as spreadsheets and dynamic geometry software, extend the range of problems and develop understanding of key mathematical relationships. A strong foundation in number and operation concepts and skills is required to use calculators effectively as a tool for solving problems involving computations. Appropriate uses of those and other technologies in the mathematics classroom enhance learning, support effective instruction, and impact the levels of emphasis and ways certain mathematics concepts and skills are learned. For instance, graphing calculators allow students to quickly and easily produce multiple graphs for a set of data, determine appropriate ways to display and interpret the data, and test conjectures about the impact of changes in the data.
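The conjecture-testing described above can be illustrated with a short script: fit a least-squares slope to a small data set, perturb one point, and observe the effect on the fit. The data are invented for demonstration.

```python
# Fit a line to a data set, change one point, and see how the slope responds.
def slope(xs, ys):
    """Least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, 9.8]
print(round(slope(xs, ys), 3))          # slope of the original data

# Conjecture: a single outlier noticeably changes the fitted slope.
ys_outlier = [2.1, 3.9, 6.2, 8.0, 20.0]
print(round(slope(xs, ys_outlier), 3))  # slope roughly doubles
```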
Technology is a tool for learning and doing mathematics rather than an end in itself. As with any instructional tool or aid, it is only effective when used well. Teachers must make critical decisions about when and how to use technology to focus instruction on learning mathematics.
Patents: A Tool for Technological Intelligence
Patents are the largest source of technological information. A patent is granted to an inventor as a reward for innovation, in the form of an exclusive monopoly right for a period of 20 years from the priority date of the invention. Due to advances in the IT sector and the internet, these valuable documents are now within reach of the general public. Any person skilled in the art can search various patent databases and retrieve the patent documents they need. Several patent databases, such as USPTO, EPO, and JPO, are freely open to public access. By going through the patents related to a specific technological area, we can find a great deal of information about the life cycle of technological innovation, including:
o evolutionary path of a specific technology,
o technological development,
o technological diversification,
o technology mergers,
o major players in a specific technological area,
o key points of the specific technology.
"The World Intellectual Property Organisation (WIPO) revealed that 90% to 95% of all the world's inventions can be found in patented documents."
Patent analysis can reveal very valuable information that is not available anywhere else. After the patent search, the crucial part is the analysis, and one has to be very clear about the objective of the study. The information in the patent documents can be utilized in different forms according to need and mapped accordingly to give a snapshot of the entire analysis.
Patent data can be used for the preparation of technological landscapes. Logistic and circle mathematics can be very useful in plotting the technological landscape. Such a landscape can reveal the evolutionary trend of a technology: how it evolved from a basic technology, along with the period of technological diversification and its nature. These maps also give a detailed overview of the merging of different technologies to produce breakthrough technologies. They are very useful for R&D personnel in evaluating the position of their research and technology, and in finding ways to innovate more advanced and valuable technology.
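The logistic mathematics mentioned above can be sketched briefly: cumulative patent filings in a maturing technology area often follow an S-curve. The parameters and counts below are invented purely for illustration.

```python
# Illustrative logistic (S-curve) model of cumulative patent filings.
import math

def logistic(t, K, r, t0):
    """Cumulative patents at time t: carrying capacity K, growth rate r, midpoint t0."""
    return K / (1 + math.exp(-r * (t - t0)))

# Hypothetical field: saturates near 1,000 patents, fastest growth in year 10.
K, r, t0 = 1000, 0.6, 10
for year in (0, 5, 10, 15, 20):
    print(year, round(logistic(year, K, r, t0)))
```

Plotting such a curve against actual filing counts shows where a technology sits on its life cycle: early emergence, rapid diversification, or saturation.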
In today's global context, firms need to know which technologies competitors can easily choke off, and may be attempting to. They also need to know the spaces in technologies where competition is intense, and the areas where competitors are concentrating their IP development and R&D efforts. They need to be able to track patent acquisition and development strategies and chart out the competitive landscape. To evaluate a technology before making any investment decision, firms need to know the pace of patenting activity in the technology, which patents embody fundamental ideas in the technology, and how vulnerable the firm's own technologies are to patent infringement. This gives them much-needed information for deciding between technology development and technology acquisition.
The ability to extract relevant information from patent literature is a crucial success factor for anyone involved in technological innovation. Technology mapping techniques can be used to transform patent information into knowledge that can influence decision-making.
Patents are an important source of technological intelligence that companies can use to gain strategic advantage. Technology intelligence is a process for gathering, analyzing, forecasting, and managing external technology-related information, including patent information. Computational patent mapping is a methodology for developing and applying a technology knowledge base for technology and competitive intelligence. The primary deliverable of patent mapping is knowledge visualization through landscapes and maps. These maps provide valuable intelligence on technology evolution and revolution, the nature of various types of players (pioneering, big, pure, and emerging), state-of-the-art assessment, and more.
These types of technological maps will prove to be a valuable multiplier in R&D and commercialization activities, in various ways including the following:
o Developing further insights in response to strategic requirements and policy formulation in the organization
o Forecasting and identifying technological activities and trends in the industry
o Aiding in the visualization of alternative development and growth paths available to the organization
o Enabling pre-emptive recognition and action on potential licensing opportunities
o Identifying prospective partners and clients
o Identifying technology discontinuities and areas of opportunity in chosen technologies
o Monitoring and evaluating the technological progress of competitors and potential competitors
o Supporting decisions on foray and investment into particular technologies and sub-technologies
o Maintaining surveillance of competitors' technological progress and alerting oneself to new entrants to the field
o Spotting white spaces or opportunity areas within a dense technological domain
o Stimulating new ideas and creating new IP
o Complementing corporate IP filing strategies
o Supporting technology proposals for large-scale national and international projects
o Supporting investment and technology due diligence on companies
Patent mapping can be an integral part of IP management. It can uncover valuable information hidden in patents and provide useful indicators of technical trends, market trends, competitor activity, and the technological profile and innovation potential of a company. Patent maps are visual representations of patent information that has been mined and aggregated or clustered to highlight specific features. There is a high degree of flexibility in visualization, which may take the form of time-series plots or spatial maps. We provide a more market- and technology-oriented analysis of a complete patent portfolio via our patent mapping services. Patent mapping can be used to ascertain the quality of patents with respect to prevailing technology and the extent to which patents affect the technology. This is a valuable input into technology sourcing/development and R&D decisions. Patent mapping can be indispensable both for firms that have an under-utilized patent portfolio and are looking to license or assign it on the most favorable terms, and for firms that are looking to develop patent portfolio strength in a particular technological field.
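The clustering step behind a patent map can be illustrated in miniature. Real tools use TF-IDF weighting and proper clustering algorithms; the sketch below simply groups toy abstracts by Jaccard word overlap. All patent IDs, texts, and thresholds are invented for demonstration.

```python
# Minimal sketch of the clustering idea behind patent maps: group patent
# abstracts by shared vocabulary using a Jaccard-similarity threshold.
def jaccard(a, b):
    """Fraction of shared words between two texts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

patents = {
    "P1": "fluidized bed combustion of biomass fuel",
    "P2": "biomass fuel combustion in a fluidized bed boiler",
    "P3": "lithium ion battery electrode coating",
}

# Greedy grouping: attach each patent to the first cluster it resembles.
clusters = []
for pid, text in patents.items():
    for cluster in clusters:
        if any(jaccard(text, patents[other]) > 0.3 for other in cluster):
            cluster.append(pid)
            break
    else:
        clusters.append([pid])

print(clusters)
```

Each resulting cluster would become one region of the map; plotting cluster sizes over filing years turns the grouping into a landscape.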
Mere subject specialization is not enough for this work; analytical thinking and innovation are essential. Today, many software resources are available for mapping patent data, but almost all are confined to bibliographic information, and machine work cannot be compared with human intelligence. Patent mapping requires many skills. First and foremost among these is the ability to understand the complex scientific ideas protected by the patents themselves. Although it is possible to create a patent map by analyzing the relationships between patents without understanding the subject matter, such a map is often useless and needs to be refined by someone who understands the intricacies of the particular scientific discipline underlying the invention. Thus, I expect the need for people with scientific (and engineering) expertise in the field of patent mapping to increase. That is why many KPO firms are looking for the right individuals; demand is high today and will certainly grow in the near future.
Vinod Kumar Singh
Knowledge Scientist
Email-vinod.patent@gmail.com
Mobile-91+9393000913