December 18, 2009

4-pack ABS - The ones your car has!!

Now that for a few posts we have discussed only "C", I thought this time we will get back to some automotive basics.
The topic this time is vehicle stability, and towards the end we will talk a bit about ABS.
The vehicle undergoes various kinds of forces when it is in motion. We will talk about stability while in motion in this post. The simplest to understand are the horizontal forces, i.e. when the car accelerates forward there is a pseudo-force that acts backwards (which pushes the driver backwards when the car moves forward). This force acts in the opposite direction on braking. That is the reason the front of the car dips when we brake hard. Now, assuming that you are going on a straight road, when you brake the wheels will reduce their speed continuously till they come to a standstill. Here there are some concepts that we need to understand.
  1. Why does the car move forward when the wheels turn? - The answer to this lies in friction. If there were no friction, there would be no reason for the car to move forward. So who is really driving the car ahead? Well, it is the friction force. If the wheel spins too fast then it tends to slip, because the surface friction cannot keep up. This means that when you have low friction, the best thing to do is to let your wheels spin slowly rather than fast. Why slow? The force acting at the point of contact grows with the wheel speed only as long as the tyre grips. So flooring the accelerator doesn't always mean more speed.
  2. What happens when we brake? - The brakes, when engaged, bring down the speed of the wheel. This means we are reducing the force at the point of contact, and the surface will offer us less friction force, so our vehicle will slow down. However, when do we start slipping? This occurs when the rest of the body of the car is still much faster and drags the wheels along. The situation worsens when the brakes are jammed hard and the wheels lock. After the lock the friction equation changes: from rolling friction it changes to sliding friction. This sliding motion will reduce or remove steer-ability. Once the steering capability is lost, all that will happen is that the vehicle moves in the direction of the momentum it had when the tyres locked.
  3. Other problems? - The brakes work quite well when we are on a straight road. However, when we brake on a curve we have to worry about forces other than the horizontal ones. These are the lateral forces that tend to push the car in directions other than the one intended by the driver. This, however, is not controlled by having an ABS.
Yeah, finally we are talking about the ABS. The ABS or Anti-lock Braking System helps a lot with number (2) above. It ensures that the wheels are never locked. This helps in the following ways:
  • Wheels are not locked so you can steer the vehicle till the last minute.
  • Wheels are braked and released at a very fast rate, which means that the wheels are approximately maintained at the point where they get maximum braking efficiency (approx 20-30% slip). This works because the wheels have more friction when they are rolling than when they are slipping. However, this makes you wonder: if slipping has less resistance, why not make the vehicle slip all the while? That would have been a good idea if the areas of contact while slipping and rolling were the same. Since they are not, in effect under normal circumstances the slipping tyre would give you a lot more resistance. Also, we have got to think about the life of the tyre. However, when we are braking it works better to have the tyre rolling rather than slipping.
  • If you have a 4-way ABS system (lucky you) then each wheel can be individually controlled to ensure the best braking force through the 4 wheels, so that the vehicle can be stopped easily even on µ-split surfaces. Btw, 4-way ABS means that each tyre has a wheel-speed sensor and a control valve that can be controlled to lock or unlock the brakes. 3-way systems have the front 2 wheels individually controlled, while both rear wheels share a single control valve.
OK, I will stop here to take a breather. More later.
Please leave your comment. You can subscribe to this blog by using the links under "Subscribe" section.


December 5, 2009

Preprocessor - Coding Myth 4

There is this classic case which came as a surprise to me, when a fairly senior member of my team started arguing with me about a particular implementation that I had made.... Below is the case:
#define BusClock ((Crystal_Frequency)/(Prescalar))

Now the argument was... why was I not calculating the value manually and typing it in.
Our Dude Says >> See, macros will be just replaced by the compiler, so when you load it into the register like this

BCLK = BusClock;

then there will be more run time due to the division of crystal freq by prescalar.
Well, this is not entirely right. Parts of the statement are correct and parts are wrong.
Correct parts >> Macros will be dumbly replaced (not by the compiler but by the preprocessor).
Which means that when my code goes to the compiler it looks like this:
BCLK = 6000000/12;
where the crystal is 6 MHz and the prescalar is 12.
Does that mean it will go to the assembler in the same way? No!! And that is the wrong part.
As part of basic optimization the compiler will be clever enough to do this division of 6000000/12 and feed the appropriate value into BCLK, which means it actually looks like
BCLK = 500000;
before anything meaningful really begins.
In most compilers this cannot be turned off, as it is a basic optimisation. In fact, I have known a compiler that had a problem when we gave it numbers that were too big. It would just give some gibberish error.

So the basic rule is that ...
If the compiler has all the information that it needs to reduce an expression to a single number (after as many operations as needed... like 1+(10000)*12/(53) etc...), it will do so on its own at compile time. Note that these divisions and multiplications are integer unless you explicitly tell the compiler that they are double. So based on how you write the values you might end up with varying results.

If anyone knows of a compiler that doesn't do that, please enlighten me...
Please leave your comment. You can subscribe to this blog by using the links under "Subscribe" section.


Powered by ScribeFire.

November 9, 2009

Eeprom and Flash Emulated Eeprom

This post talks about some basics of EEPROM and FEE.
To start the discussion it is necessary to know what EEPROM stands for. EEPROM stands for Electrically Erasable PROM, where PROM means Programmable Read-Only Memory. The very fact that we are highlighting "Electrically Erasable" means that there are other methods to erase a PROM. I leave it to the reader to find out what these are and post them as a comment.
Why do we need it?
In any system there is some set of parameters that needs to be remembered through power cycles and can/may change between power cycles. An example.... Well, your monitor settings (i.e. the brightness, contrast etc. that you set on your monitor) are remembered through power cycles even if you connect the monitor to another PC or whatever. One of the places to store these could be the EEPROM.
What does this mean?
To write to the EEPROM I should be able to erase it and then rewrite it. It should also be capable of being written several times (maybe a few lakh times). Also, it should have data retention capacity, i.e. if I write it today and check after say some 10 years, the data should be the same even in the absence of any power.
What is FEE then?
As you can see, some of the properties of EEPROM are held by flash memory also. Like EEPROM, I can write into flash memory electrically, it can retain information across power cycles, and it can be written multiple times. However, the biggest difference is that flash technology allows writing only in large chunks, like 64 or 128 bytes. Also, the life of flash memory is much shorter than that of EEPROM. On the other hand, the best part is that it costs much less than EEPROM.
So FEE, which stands for Flash Emulated EEPROM, is basically a method of emulating the behaviour of the EEPROM using flash memory. This means if a flash memory has 10,000 write cycles, I should somehow make it work as if it has 100,000 write cycles (like in the case of EEPROM). Also, I should somehow provide the capability of writing a single word/single byte, as available in the EEPROM.
In short, if I have FEE then the application should feel as if it has an EEPROM....
How to do this ? Well that is for another post....


Please leave your comment. You can subscribe to this blog by using the links under "Subscribe" section.



October 17, 2009

Sign language

I have been assigned the painful job of taking interviews for my team at the company. Why is it painful? Because I am the junior interviewer on the panel and I have to sit through the bullshit that the senior interviewer gives to the candidate.... Anyway, this post is just about a couple of things I found people were not aware of!!

Signed arithmetic in C?
The first thing we should know is that most C compilers and µCs use the 2's complement method to do signed arithmetic. If you are wondering, "hey, that is the only way".... Wait!! Another way of representation is using the first bit for the sign and the rest for the data.
Which means
-5 = 1101b
-7 = 1111b
This representation is good and simple. However, it was not so good for the chip manufacturers who had a problem with zero.
0 = 0000b or 0 = 1000b. Which meant that on the number line the same number could be represented by two symbols.
This problem was solved by introduction of the 2's complement. Which is nothing but 1's complement + 1.
which means
-5 = 1011b
-7 = 1001b and 0 = 0000b.
There was no concept of negative zero. This effectively ensured that with n bits one could represent 2^n distinct values. So for 4 bits we have 16 values; you would expect 8 positive and 8 negative. However, we have 8 negative, -1 to -8 (1000b), and 7 positive (1 to 7). Zero is actually neither positive nor negative and takes the middle position (without a sign bit, so really it sits with the positives... what say?).
One question that I sometimes ask is: why was 1000b not chosen as +8? Technically it is correct!! There are two problems. 1) We would kill an advantage which we have with 2's complement (i.e. the MSB is 1 exactly when the number is less than zero). 2) I forgot!!...

Anyway, it is necessary that we understand how signed arithmetic works to understand concepts like add-through-overflow (used in Targetlink) and the usage (or avoiding!!) of saturation blocks (in Targetlink and RTW). You can do intelligent coding if you understand how signed numbers are interpreted by the compiler you are using.

Please leave your comment. You can subscribe to this blog by using the links under "Subscribe" section.



September 18, 2009

The Early Fuel Pulse

This post again goes back to the EMS basics that I have been writing about in the previous posts.
Early systems, when they started to use gasoline fuel injection (manifold injection), only controlled the fuel pulse. Fuel pulse is the industry term that has evolved for the duration for which the fuel injector's electrical signal is enabled. How does the fuel pulse help us to control the fuel?
The fuel line is maintained at a specific pressure. This pressure is usually created by a pump either in the fuel tank or outside of it. The fuel is maintained at a fairly high pressure. The injectors are solenoid-based valves. An electric signal running through the solenoid opens the valve. The valve is closed back with the help of a spring. Ideally, this spring ensures that once the fuel pulse is cut off the valve closes immediately. However, this is not quite true, and an adjustment is added in the EMS software to take care of this variation.
As in the earlier discussions, the fuel pulse directly depends on the engine load and the engine rpm. Now I explain in some bulleted points why some sensors are needed and how they contribute to the calculation of the fuel pulse.
Intake Air Temperature Sensor: The density of air depends on temperature, hence the temperature sensor is used to indirectly calculate the density of air. Warm air is less dense while cold air is more dense.
Air Flow Sensor: In systems which use the air flow to determine the intake air mass, an Air Flow Sensor is used. This is basically done using simple logic: 1) calculate the volume of air that has passed the flow sensor per second, 2) use a lookup-table based approach to get the density of the air based on the temperature, 3) use the density and volume/sec to get the mass air flow into the manifold.
Manifold Absolute Pressure: Not all systems use the Air Flow Sensor; some (like the Bosch D-Jetronic) use the pressure in the manifold to calculate the mass air flow into the manifold. This system has a drawback, because as the engine revs up the pressure fluctuations inside the manifold are immense (because at the other end of the manifold we have the engine, which is acting like a pump!!). Using the gas equation it is easy to derive the mass of the air from the pressure and temperature information.

Just a small deviation here..... Air Flow Sensors come in two types: 1) vane type, 2) heating element type.
  • The vane type is pretty straightforward.... you put a piece of thin metal in the path of the air flow; the faster the flow, the more the vane is pushed. The other end of the vane is connected to a potentiometer which shows the variation of resistance to the ECU.
  • The heating element type uses a small coil and maintains it at a particular temperature using a small current. The air flowing over this coil cools it down, changing its resistance and thus varying the current.
The RPM of the engine is detected easily by a magnetic pickup and a gear. Note that early systems did not control the ignition timing; it was fully controlled using the camshafts and a distributor. LE-Jetronic systems from Bosch implemented electronic spark distribution; however, even in these systems there was very little control over the ignition timing, which still heavily depended on the camshaft.

Corrections and Compensations
Though the base fuel pulse width was calculated using a lookup table (i.e. FuelPulse = f(rpm, engineload)), there are some special diets for the engine under some special conditions.
  1. Startup : The mixture-leaning effect is seen in a cold engine. What does this mean? When the engine is cold (the manifold and other portions also), the mixture formation is not at its best... which means that not all the fuel that is injected is going to burn. Hence, to achieve the same behaviour, more fuel needs to be added. Conclusion : Increase in PW.
  2. Low Battery: This means that the battery voltage has dropped below a specific value. This directly translates to insufficient current supplied to the injectors. Since there is less current in the injector solenoids, they do not open so well, hence we need to elongate the pulse to ensure that the same quantity of fuel is delivered as we intended. Conclusion : Increase in PW.
  3. Idling : The engine needs a certain RPM to be maintained so that it can overcome its own resistance and continue to function. However, if the throttle position supplied by the driver is taken into consideration... there is none, i.e. the driver doesn't push the accelerator to keep the engine idling. Hence the EMS should recognize this condition and automatically provide a finite amount of fuel. Usually a special switch is incorporated in the throttle pedal assembly which detects a no-throttle-press condition.
  4. Acceleration Enrichment: A sudden demand for power occurs when the driver floors the throttle. This sudden flooring of the throttle can cause the mixture to lean out instantly, making the engine undergo a "lean stumble". This is avoided by providing a throttle floor switch which is activated when the throttle is floored and indicates to the ECU that there is a sudden power demand. Conclusion : Increase in PW.
In my current project there have been fierce discussions about which one is a correction and which one is a compensation.... I say, does it matter? If you understand how we can classify these, please let me know too!!
[Note: The above is just the beginning and talks only about early Manifold injection system, Next post will contain how other signals were added to this base system to improve the performance!!]






September 7, 2009

The torture that a Suspension undergoes!!


The pic should tell you why we need a good suspension in trucks. The pic is from a test track; however, the roads in India keep in store even worse conditions for the truck. The suspension design for heavy duty trucks is an extremely challenging task. There are some very strict rules that need to be followed to ensure that the weight of the suspension is low (so that, in effect, the truck's unladen weight is less) while also providing the right amount of stiffness to ensure good drive-ability. The design should consider worst-case scenarios to ensure that the truck suspension holds even under heavy load conditions. Usually a factor of safety of 2.5 to 3 is considered in such designs.




September 4, 2009

Glossary of EMS terms -part 1

This article just introduces a few terms that are very frequently used when we talk about powertrain. I would say this is just some kind of a glossary of terms.

Lean & Rich Mixtures: Both terms talk about the air-fuel ratio. A lean mixture is one in which the air (oxygen) is more than what is needed to burn the fuel. Obviously, a rich mixture is the other way round (though the opposite of lean is fat!!). The stoichiometric ratio, or the "right" ratio, is about 14.7:1 for typical gasoline. This means that to burn 1 part of fuel we need about fifteen parts of air, by mass. Note that when I talk of air it is actually the oxygen that I am really interested in. One very big misconception is that 14.7:1 is a universal ratio, which is false, because the ratio depends on the fuel (its octane number + whether there are any additives like anti-knock agents).

Lambda : For a long time I could not understand this value, and people told me all kinds of things. For starters, "λ" is not the air-fuel ratio but a measure of the air-fuel ratio. The best way of understanding λ is that it is the "excess air factor". So if λ is 1, that means there is no extra air and we are running at the stoich ratio. If λ is greater than 1, that means we are running lean, because we have extra air in the mixture. Similarly, λ < 1 means that our mixture is rich; there is less air than needed to burn the entire volume of fuel that we have.

Lambda Sensor: This is also called the O2 sensor. Again, this was a very confusing term for me (still is!!). Why do we call this a λ sensor? What does it measure? A λ sensor is actually an oxygen sensor. It gives out a voltage based on the oxygen content in the stream. There is a platinum probe, one side of which is exposed to the exhaust gases while the other side is exposed to atmospheric air. This works pretty much on the same principle as an electrolytic cell. The voltage output is used by the ECU to do other calculations, about which we shall talk in the future. In a modern vehicle I would expect at least 2 such sensors.

58x Signal: This is basically the term that has evolved for the crank-tooth signal. For ECU-controlled engines with individual cylinder control, it is necessary to know when to fire which cylinder, and for this the ECU should know which cylinder is at TDC and which at BDC. This information is available to the ECU via a toothed wheel which is connected to the crankshaft of the engine. For a lot of reasons, including ease of software computations, 60 teeth were chosen for the wheel, with 2 teeth missing. The missing teeth help the ECU recognize where the cylinders are. Some manufacturers are more comfortable using the 28x signal, i.e. 30 teeth with 2 teeth missing.

Please leave your comment. You can subscribe to this blog by using the links under "Subscribe" section.



August 20, 2009

Throughput fundas

I have been quite inconsistent in updating the information presented on this blog. The one thing that discourages me, like any other new blogger, is that very little traffic is seen here, and since this is a technical blog, unless I get feedback it is difficult for me to judge if something is good or bad. However, for now I have considered that no feedback is negative feedback. Nevertheless, I continue to write.
What is throughput? That is this week's question.
Wordweb tells me
Output relative to input; the amount passing through a system from input to output (especially of a computer program over a period of time)

The throughput that I am talking about is the one that embedded engineers often discuss. In simple terms, this value is a measure of the CPU load under the worst conditions. What could be the worst conditions? These are basically situations which demand more processing power. For example, image stabilization, red-eye detection, smile detection, and ambient light detection along with click detection or auto timer (and image post-processing) perhaps put a great deal of performance demand on the CPU in a digital camera, and this also determines its worst-case performance.
Unfortunately, unlike many other engineering terms, a higher value of throughput here means that your system is more loaded. Often, under some conditions, the throughput reaches 100%, which means the CPU has almost no idle time and is being utilized all the time to its maximum capacity.
If I look at it from an EMS perspective, then startup is when very high CPU loads are encountered. In an OS-based system there are methods to determine for how much time the CPU is free. Usually, some background tasks which need not be done all the time are done in this free time (for example, CRC calculation or RAM checks). In simple scheduler-based systems the throughput is derived from the time remaining in the baseloop after all the tasks have finished. One more parameter that influences the throughput to a great extent is the interrupt rate. A very high interrupt rate will ensure that the CPU loses a lot of time in context switches, which are really an overhead and do not contribute in any way to the functionality of the system.
[This was my understanding of throughput. After looking into the wiki, I think the origin of this term comes from CPU bandwidth utilization. The term is primarily used for parametrization of communication channel bandwidths.]

How Do we ensure that we never reach 100% throughput?
  • Plan your interrupts and their sources well. You should know the worst-case rate of an interrupt beforehand. For example, the door-lock-engage interrupt cannot come more than, say, 1-2 times a second (due to the inertia of the lock).
  • Write your code with throughput in mind. E.g. if possible use binary search in place of linear search, and use macros instead of functions judiciously.
  • Overruns should be detected in software using some special variables or debug variables. An overrun occurs when the throughput goes above 100%, which means that you have asked the CPU to do more than it is capable of handling in the given time.
This is a very brief and windowed perspective of throughput. If you have some other inputs as to how this relates to your domain, then do put in a few lines of comments.
 Please leave your comment. You can subscribe to this blog by using the links under "Subscribe" section.













July 25, 2009

The IDE and The Compiler

This is a very basic article, I would say for newbies or for those who are misguided. Also, I was asked this question by someone recently. The question, or rather statement: "Turbo C is a nice compiler for it has a very nice gui. Gcc gui is not good."

Well, the problem here is that we are confusing the compiler with the IDE (Integrated Development Environment). The compiler is essentially a parser (a text parser, for simplicity) that runs through a text file, which we typically name with extensions like .c, .h etc. The job of the parser (or compiler, henceforth) is to parse the file and output another file, based on certain rules which must be adhered to by the writer of the text (or code, henceforth). The file that is usually output is an object file. Note that as an intermediate step the compiler generates assembly code, which is usually not output (unless you tell the compiler to show it to you specifically, via a command-line option: -S for gcc). The object code can then be linked to form the executable. COFF & ELF are common object file formats; however, compiler writers are free to choose what they want.

Now coming to the IDE. IDE has really got nothing to do with your compiler. The key words are explained here
Integrated: Usually every compiler vendor comes up with his/her own editor to help the coder write code. Why a specific editor? Typically, each compiler provides its own set of special keywords etc. which the editor can highlight in specific colors, and other features like codesense are provided for the ease of the coder.
Development : Yeah! we all write code to do something or develop something
Environment : This is the key term. The IDE provides an environment for the coder where he doesn't need to know the intricacies of the compiler and the low-level details of how to use it. You can just take the IDE and start writing your code without bothering about the command-line arguments that need to be passed to the compiler. Also, you need not worry about feeding in the include paths etc. (not all IDEs have this feature).

Now I will try to defend gcc. I have used gcc with two (or three??) IDEs and it works great. The simple ones are Dev-C++ & Code::Blocks. The difficult ones are actually extinct now.... CodeWright. Note that the best part about gcc is that it is a single compiler that will allow you to compile with a whole lot of customizations through the command-line arguments. I remember a project where I was using gcc only for creating dependency files, which were then fed to an embedded compiler which unfortunately did not have the capability to make dependency files (folks at my previous firm will know about this!!).
Why we need dependency files will be explained in another post soon. Until then.....
 Please leave your comment. You can subscribe to this blog by using the links under "Subscribe" section.





July 19, 2009

Engine basics - Knocking

This post talks a bit about a concept called "Knocking", which is very common in spark ignition engines with very high compression ratios.
The question we need to answer first is what is Knocking?
"Knocking" is a metallic pinging sound that is caused inside the engine cylinder & leads to high-intensity vibrations in the engine block (usually causing metal fatigue in the cylinder walls). If the engine knocks for a very long duration then this might have a serious impact on the usability of the engine and on the engine life.
What causes "Knocking"?
Flame front:
The power inside the cylinder is generated when a spark ignites the air-fuel mixture, leading to a flame front. The flame front is a high-velocity pressure wave that, after originating near the spark plug, travels downwards to push the piston head and ensure that our engine keeps running.
Now, what happens is that, based on the fuel quality among many other factors, there are certain spots inside the cylinder with some carbon deposits. If the cylinder temperature keeps rising due to successive ignitions, then at some point these hot spots reach a temperature where they are capable of igniting the air-fuel mixture, or the yet unburnt gases (also called end gas). Simply put, there is sufficient heat accumulation at certain points that they act as potential spark plugs.
The Problem:
This causes a problem because we do not have any control over these spontaneously created spark plugs (so to say). This means that they can ignite the end gas at any point of time. In homogeneous operation of the engine (more about this mode in future posts), the mixture is fairly rich and these hot spots have sufficient energy to ignite the air-fuel mixture. This phenomenon is often known as "Detonation". This ignition causes a pressure wave to travel from the spot towards the periphery. If this flame front collides with the flame front travelling from the spark plug, the result is two very high pressure waves banging into each other. This leads to very high pressure peaks which put extreme amounts of stress on the cylinder walls. Continuous knocking might lead to permanent stresses forming in the cylinder walls & the piston head, which eventually will lead to their failure.
How to Avoid it ?
A few years back, when "leaded petrol" used to rule the market, the folks added something known as an anti-knock agent to the petrol to ensure that it did not knock. This, however, led to a higher amount of particulate matter in the exhaust and also was not good for the cylinder walls. The anti-knock agent was a lead compound which I cannot remember at this point of time.
These days we use more of "unleaded petrol", and there has to be a different mechanism to control the "Knocking". This is done these days by ensuring that the engine temperature doesn't get very high (then there is less chance of hot spots being created). One of the methods employed to do this is EGR. Exhaust Gas Recirculation ensures that the temperature of the engine comes down, apart from having benefits like enhanced fuel efficiency and reduced NOx in the exhaust. However, that is a different topic and will be dealt with a little later. The other method, used in conjunction with EGR, is spark retard. Spark retard ensures that you ignite the air-fuel mixture late enough that there are no pressure peaks. When we do not have pressure peaks, the chances of knocking are lower.
Finally, use of high-octane fuel will result in better combustion of the fuel and a smaller hot-spot contribution from poor fuel quality. The reduction of hot spots will result in reduced knocking even at higher engine temperatures.
Some links that give more info on how it is detected etc
[1]

Please leave your comment. You can subscribe to this blog by using the links under "Subscribe" section.






July 17, 2009

No Posts for some time

I have been busy with my Targetlink & Simulink modelling activity & also a bit held up with the income tax filing process (though this time it is quite simple & I love it!!). There have been some sawtooth waveforms formed by the BSE Sensex index of late & this is giving me some sleepless nights.... Hope to put in some interesting stuff soon....







July 3, 2009

Const Keyword - Coding Myth 3

This post talks about the "C" keyword "const". It is very often thought that making a variable "const" ensures that it gets placed in the non-volatile memory (ROM or flash). But how true is this?

Well, to start with, w.r.t. the ANSI "C" compiler, the "const" keyword just tells the compiler that the user doesn't intend to modify the variable.
E.g.
           const unsigned int Gctrl_NoOfGears= 10;
so if I try to do this
           Gctrl_NoOfGears += 1;
I can expect a warning or an error, something like this:
error: assignment of read-only variable `Gctrl_NoOfGears'
However, does this guarantee that the variable is now placed in a non-volatile area of memory?
Well, the answer is: it is compiler (settings) dependent.
Why?
Most embedded compilers are very smart and automatically place the constants in the ROM/flash area. However, sometimes the flash or ROM is used only as program memory and NOT as data memory. In that case there is no way (directly) that the compiler can place this in the program memory. Also, in these processors the program memory is not directly accessible, because the program & data are stored in different memory locations & have different buses to access them. In such processors, writing
const char DisplayStr[] = "Welcome";
will not cause DisplayStr to go into the ROM area.
You will have to do this via some compiler pragmas like
#pragma section start ROM
const char DisplayStr[] = "Welcome";
#pragma section end ROM
to make the compiler understand that it has to place the value in ROM & not in the RAM area.
Do I have to worry?
As a novice, or someone programming a simple system: NO. However, when you are running low on resources (and for various other reasons) it makes sense to have a look at what the compiler is doing when you say "const". I repeat: MOST compilers are clever enough to put the data into ROM automatically.
The best practice is always to have a glance into the map file to ensure that you have everything at the right places.
Also, in one of the future posts I will be talking about other usages of const and how some constants are present even without your knowledge (also a little talk about const volatile... the common question in any interview on "C").

Please leave your comment. You can subscribe to this blog by using the links under "Subscribe" section.

Previous coding myths here {1} ,{2}





June 18, 2009

Which came first 2-stroke or 4-stroke?

We all know this and it has been told to us again and again...
4-stroke engines are better than 2-stroke engines. They are more efficient, less polluting etc. Also, we know that 2-stroke engines make more power (because they have a power stroke every crank revolution!!). However, did any of you wonder why there was a 2-stroke engine in the first place, if 4-stroke engines are so good?
In fact, I just looked this up on Wikipedia:
Invention of the two-stroke cycle is attributed to Scottish engineer Dugald Clerk who in 1881 patented his design, his engine having a separate charging cylinder.
Now also wiki says
The four-stroke engine was first patented by Eugenio Barsanti and Felice Matteucci in 1854, followed by a first prototype in 1860. It was also conceptualized by French engineer, Alphonse Beau de Rochas in 1862.
Since 1854 is clearly before 1881, why did someone build an apparently less efficient version of the engine?
I believe the answer is as follows
  • The 2-stroke engine is much lighter than its 4-stroke counterpart. It has no valves (intake or exhaust), hence no camshafts & cams. This also translates to lesser lubrication needs: you can just add the lubricant to your fuel and you should be good.
  • They could be constructed in less space because of the smaller number of components, making them a good option for lawn mowers, power saws etc.
The above factors made the 2-stroke engines much cheaper than the 4-stroke versions. Modern 2-stroke engines have started employing gasoline direct injection.
The 2-stroke has come a long way and still remains a favourite among hard-core bikers who drool at the power produced by these machines (remember the RX-100). They have their issues when it comes to thermal handling (they get heated faster and need more cooling), but that can be handled by having bigger engine cooling fins (the greater the surface area of the fins, the better the cooling!!). So next time you ride an RX... remember that they came after the 4-stroke versions.

Please leave your comment if you have one. You can subscribe to this blog by using the links under "Subscribe" section.




June 14, 2009

Fuel Injection - Evolution Journey-II

This is a fully automotive powertrain article.
Disclaimer: I am a novice in this area. If experts read this kindly correct...

Q: What are the different fuel injection methods used?
Fuel injection systems evolved over the years from simple manifold injection to DI (direct injection). The injection of fuel into the intake manifold, based on the air mass currently sucked into the cylinder, is controlled either electronically or by mechanical methods. This is typically a function of the engine load, which can roughly be determined from the throttle opening. The terms "Full-Load" (i.e. the throttle valve is fully open) and "Idle" (i.e. the throttle valve is closed & the vehicle is stationary) are used very frequently in the discussions further.
After SPFI (single point fuel injection), where the fuel was sprayed into the manifold, the next thing that became very popular was multi point fuel injection (MPFI). Here multiple injectors were placed at suitable positions in the intake manifold to get finer control over the air-fuel mixture. Also, having multiple injectors ensured that the injector closest to the cylinder going into its suction stroke could be activated. In some cases this is called Port Fuel Injection (PFI). This also provided the following benefits:
  • Fuel going into each cylinder could be precisely controlled.
  • Air & fuel would get mixed just before inlet into the cylinder. This ensured that the atomised fuel did not have condensation problems; i.e. in manifold injection with a single injector, when the air-fuel mixture suddenly expanded in the cylinder under cold-start conditions, the fuel would condense & form larger droplets inside the cylinder, which led to higher emissions.
The MPFI systems came in two variants, though this is not really something great!!..
  • Batch fired: Here all the injectors for one bank of cylinders were activated in one shot; i.e. for a 4-cylinder engine, the injectors for cylinders 1 & 4 were activated together, and similarly 2 & 3.
  • Sequentially fired: Here the injectors were activated sequentially, based on which cylinder was going into its suction stroke.
Finally came direct injection. Direct injection opened up new domains of controlling the air-fuel mixture, because now only air would enter the cylinder and it would be the ECU that controlled the fuel being injected into the cylinder. By injecting at different points inside the cylinder & with different cylinder-head profiles, one could achieve a higher spread of the atomized fuel. This led to better flame-front propagation, ensuring higher power output.
Why did we not go for Direct Injection to start with ?
The manufacturing processes of earlier days did not give us methods to make injectors that could bear the very high pressures generated inside the cylinder. The fuel injector tip also has to bear the extreme temperatures generated inside the cylinder. Further, the pump which feeds the injector needs to generate very high pressures, because fuel may be injected in the later stages of compression (as is done in the stratified mode of operation). The injector nozzle design has also undergone many changes to ensure that an extremely fine spray, with precise control over droplet size, can be created.





June 9, 2009

Inlining code - Coding Myth 2

This post is regarding the inline keyword.
Very often we learn C & C++ together and end up mixing one language with the other. I learnt this the hard way when I found out in some debate that the "inline" keyword doesn't natively belong to the C language..... Boohooo!!..
The inline keyword natively belongs to C++. It serves the purpose of requesting that the function body be pasted inline instead of having a call to the function at every instance of the function call.
It was not a part of C until the C99 standard added it (and many embedded compilers are still C90). In older "C" we achieve similar functionality by using what are termed "function-like macros".
E.g.
#define Max(a,b)  ((a)>(b)?(a):(b))
The function-like macros have a major disadvantage over inline functions, and that is their blindness to the compiler.
#define macros are processed by what is known as the "C preprocessor". The preprocessor looks for the macros and does a macro-pasting operation, which means that wherever Max is used in the above example, the equivalent code is pasted.
E.g
y = Max(5,6); is equivalent to y = ((5)>(6)?(5):(6));
Then what is this blindness funda?
Well, if I wrote this code
int *ptr;
y = Max(ptr,'5');
Then even this would work as far as the macro processor is concerned. In fact, in this particular example the compiler may at best warn about comparing a pointer with an integer. However, if this were an inline function with typed parameters, the compiler would have flagged an error. So it is easy to see that the inline keyword has benefits over the #define macro.
Now some interesting stuff
  • Did you know that the inline keyword is just a request to the compiler? The compiler might choose to ignore you fully and make the function a normal function if it feels that by inlining it is losing out on optimization. This is in contrast with #define function-like macros, which are outside the compiler's control.
  • Modern C compilers provide various methods of inlining functions via compiler extensions. E.g. some compilers provide pragmas
#pragma InlineStart
void Inlinefunction(void)
#pragma InlineEnd
or things like
@inline void Inlinefunction(void)
  • Inlining is very useful to ensure modularity & keep your code clean. However, had this been natively available in "C" from the start, we embedded users would not have resorted to function-like macros.
  • Evils of the keyword "inline"..... Well, it is difficult to debug your inline function because there is no call to the function, and it is not really visible to your debugger :-(.
Now that we have some idea of the keyword "inline", you can try to check the statements made above using our good old GCC compiler with the "-S" option and have a look at the generated assembly code.
Please leave your comment. You can subscribe to this blog by using the links under "Subscribe" section.


June 6, 2009

Fuel Injection - Evolution Journey

This is a fully automotive powertrain article.
Disclaimer: I am a novice in this area. If experts read this kindly correct...

Q: Why & How did FI ( Fuel Injection) systems evolve?
For a long time carburetor-based engines dominated the automotive world. They were good and could make your car run smoothly. However, tightening emission norms & better systems being used by airplanes eventually led to their extinction in the four-wheeler world (though my XCD-135 bike still uses a carburetor for managing the air-fuel mixture). The major things that led to the death of carburetor engines are:
  • Too many compensations were needed, making the design look like spaghetti code. There were compensations for idling, for cold start, among other things.
  • Carburetor icing had plagued high-altitude flying machines; this led to the development of what we today know as EFI systems (Electronic Fuel Injection). They started off as SPFI (Single Point Fuel Injection) & then evolved into MPFI (Multi Point Fuel Injection).
With the new FI systems it was necessary to have electronics to control the fuel injection. This was done by having solenoid-controlled fuel injectors. These injectors could be actuated by providing what is generally called the "fuel pulse"; the timing & duration (pulse width) of the pulse control the amount of fuel sprayed into the manifold. The advantages of the new fuel injection system (now of course it's quite old) were:
  • Better efficiency due to the fact that the fuel-air mixture could be made LEAN or RICH ( not fat for god sake !!).
  • No carburetor icing, wherever it was applicable. Just in case you are wondering what carburetor icing is: it is basically the formation of ice in the carb's venturi due to condensation of humid cold air (caused by the further temperature drop in the venturi). This is often accompanied by re-conversion of the fuel from its atomized form back to a more liquid form.
  • EFI systems also help in the reduction of NOx emissions, making better compliance with emission norms possible.
The story is more complicated than what has happened so far, because soon a need was felt to control the ignition, cams, valves and what not. There was also the emergence of the EGR concept (which was present even in the carb-engine days). The compensations & corrections needed to the various parameters, based on vehicle parameters like engine RPM, AC (air conditioning), vehicle speed, ABS etc., have transformed the electronic control systems into a work of art.
In the next post on this topic, I will be going into the various systems that influence an engine & how the EMS (Engine Management System) handles them.
more on....
carb icing
EFI


Please leave your comment if you have one. You can subscribe to this blog by using the links under "Subscribe" section.



June 4, 2009

When Size does matter - Coding Myth 1

Again I am slow in updating this blog, and this time it is really because I was busy with some GUI-building activity in Matlab. I will write more about it in another post. However, today's topic is "C" coding myths.

Some dudes I have met write really complicated code, like the one below, stating that it will be more efficient. Somehow they seem to feel that compact source code (in terms of number of lines & characters used) translates directly into less code volume in the microcontroller. I just tried this....

Code:
unsigned char alt2(void)
{
    unsigned char var=10;
    unsigned char output;
    if(var==1)
        output=3;
    else if(var==2)
        output = 2;
    else
        output =1;
    return output;       
}
unsigned char Alt(void)
{
    unsigned char var= 10;
    unsigned char output;
    output = (var==1)?(3):((var==2)?(2):(1));
    return output;
}
void main(void)
{
    int x;
    x = Alt();
    x = alt2();
}
Now, on compiling this with avr-gcc with the -S option, you should be able to get the assembly code output. Let us compare the functions Alt and alt2, which have the same functionality.
alt2:
    push r29
    push r28
    rcall .
    in r28,__SP_L__
    in r29,__SP_H__
/* prologue: function */
/* frame size = 2 */
    ldi r24,lo8(10)
    std Y+2,r24
    ldd r24,Y+2
    cpi r24,lo8(1)
    brne .L2
    ldi r24,lo8(3)
    std Y+1,r24
    rjmp .L3
.L2:
    ldd r24,Y+2
    cpi r24,lo8(2)
    brne .L4
    ldi r24,lo8(2)
    std Y+1,r24
    rjmp .L3
.L4:
    ldi r24,lo8(1)
    std Y+1,r24
.L3:
    ldd r24,Y+1
/* epilogue start */
alt2 takes a stack frame of 2 bytes and the code is readable to a great extent. I am sure it is more maintainable compared to Alt. Alt, however, generates a stack frame of 4 bytes.
Alt:
    push r29
    push r28
    rcall .
    rcall .
    in r28,__SP_L__
    in r29,__SP_H__
/* prologue: function */
/* frame size = 4 */
    ldi r24,lo8(10)
    std Y+2,r24
    ldd r24,Y+2
    cpi r24,lo8(1)
    breq .L7
    ldd r24,Y+2
    cpi r24,lo8(2)
    brne .L8
    ldi r24,lo8(2)
    std Y+3,r24
    rjmp .L9
.L8:
    ldi r24,lo8(1)
    std Y+3,r24
.L9:
    ldd r24,Y+3
    std Y+4,r24
    rjmp .L10
.L7:
    ldi r24,lo8(3)
    std Y+4,r24
.L10:
    ldd r24,Y+4
    std Y+1,r24
    ldd r24,Y+1
We see that now the stack frame is 4 bytes &, to add to the woes, the code is not as readable either.
Thus we break the myth that complicated & compressed "C" files give compact code. More often than not, modern compilers are clever enough to do everything that is needed for optimisation. So please spare yourself the trouble and let the compiler do its job.
However, that doesn't mean we should write inefficient code. What it means is: "Don't think you have optimized the code just by changing some "if" statements to "ternary" operators. It is more than that, and quite usually compiler dependent." The best way to optimize is to read the compiler manual and try to understand the compiler and its capability. Then you can use tricks in "C" to optimize the code.
 



May 30, 2009

Some fun with matlab

I have not been able to do a lot of blogging on this site of late. Procrastination is the reason. Also, I was a bit busy with my project-work (or the lack of it!!). Here I write about a small script I wrote to convert MAT files to CSV files. I searched a lot for existing solutions, but alas none was available, though CSV-to-MAT converters exist in bulk. While I wrote the script I realized the reason for the non-existence of such a script.
In Matlab you can store different types of information in a MAT file, and some of it might not be easy to represent directly in a CSV file. For example, structures cannot logically be put in a CSV file, but arrays are easy to convert to CSV.
So I wrote a script which can convert MAT to CSV, but with a restriction: the MAT file should not contain any structures, only plain numeric arrays (each variable becomes one CSV column).

I don't think I can upload a file to this blog, so till I can put the stuff on esnips, this is the only way to share it. :-(


function mat2csv(fname,target)
% MAT2CSV convert a MAT file to a CSV file
% MAT2CSV('matfilename.mat','csvfilename.csv')
% Output is generated using the data provided in the MAT file (one column
% per variable). Unused cells are filled with zeros.
% The first row of the CSV holds the variable names.
if(nargin<2)
    error('Incorrect number of inputs. Refer to usage by using "help mat2csv"');
end
y = load(fname);
flds = fieldnames(y);
% find the longest variable, so every column can be padded to that length
largest_entry = 0;
for idx = 1:numel(flds)
    largest_entry = max(largest_entry, numel(y.(flds{idx})));
end
outmatrix = zeros(largest_entry, numel(flds));
for idx = 1:numel(flds)
    data = y.(flds{idx});
    outmatrix(1:numel(data), idx) = data(:);  % works for row or column arrays
end
% header row with the variable names
fl = '';
for idx = 1:numel(flds)
    fl = strcat(fl, flds{idx}, ',');
end
[rc,cc] = size(outmatrix);
fid = fopen(target,'wt');
fprintf(fid,'%s\n',fl);
for idx = 1:rc
    for jdx = 1:cc-1
        fprintf(fid,'%d,',outmatrix(idx,jdx));
    end
    fprintf(fid,'%d\n',outmatrix(idx,cc));
end
fclose(fid);
end


May 20, 2009

Faster and Hotter chips!!

I have not been able to blog here due to two reasons: one, I got lazy; two is the same as one. Anyway, now I have woken up from my slumber. This neat article talks about a new 8-core chip (for servers) from Intel. However, have you ever wondered why there was suddenly a paradigm shift from having faster chips (3 GHz or more) to having more cores?

The main reason (though there might be many others) is the problem of heat dissipation. To give a little more information in this regard, I will try to go a bit below the layer. Modern chips are all based on CMOS technology. Though most magazines call them transistors, they are really MOSFETs (MOS Field Effect Transistors). CMOS has this wonderful property that it consumes practically no power while it is holding a logic '0' or a logic '1'. So where does the power go? The power loss really occurs when the CMOS switches from logic '0' to logic '1' or vice versa.

This means that the more you switch the transistor, the more heat you dissipate; and the more heat you dissipate, the more cooling your chip needs. Now we are left with two options: either ensure that we cool the chip well, or reduce the heat generated. To a great extent we can cool the chip by adding bigger fans and heat sinks; however, there is a limit to this (because the heat generated takes a finite amount of time to get carried out and dissipated!!). If we concentrate on the second option, we observe that if we do not switch the transistors so fast, we can control the heating. This is what Intel has chosen to do by not trying to pump up the gigahertz. To boost performance, there are now two (or more) cores which run at a slower rate and can execute, in parallel, instructions that are not directly dependent on each other. The story, I am sure, is more than this, and I am not technically competent enough to put in that stuff here. However, someone I know is; I will be requesting him to add a few lines to this blog in a few days.
Please leave your comment if you have one. You can subscribe to this blog by using the links under "Subscribe" section.





May 7, 2009

Time Out!!

This question came from a very good friend and ex-colleague, so I am responding to it via this post, which I trust will be useful to others also.

The question

What is a "Cyclic Wakeup Timer"?

The question can be answered if we understand each of the three terms.

We start with Timer. A timer is a hardware or software component that keeps track of time via counts. If we know that each count takes, say, 10 ms, then we know that 10 counts mean 100 ms. The timer is driven by its clock, which is usually derived from an external crystal or internal PLL circuits.

Next we take Cyclic. It is clear from the word that this denotes a repetitive process. The timer runs continuously and maintains time, which means I can configure a cyclic timer to create an event every 10 ms. When 10 ms elapses, an event is generated by the timer (called the timer interrupt by some people). In the event handler we can choose to reinitialize the timer to count for another 10 ms. Various configurations are possible; we will not discuss all of them here.

Last part is Wakeup. This is simple, we do this everyday. In this case we are talking about the microcontroller waking up.

To put it all together, a timer that wakes up the microcontroller at periodic intervals is a CWT or Cyclic wakeup timer.

The pertinent question is now, why do we need this?

I can talk only for automotive and perhaps for some other battery powered devices.

Many devices go into sleep mode when they are not doing anything useful; however, they periodically wake up to check if there is something useful to be done. I quote a few examples:

  • A PKE (Passive Keyless Entry) system might wake up periodically to see if there is a key in the vicinity of the vehicle. If there is, then it automatically unlocks the door. (Note: it is technically quite challenging and complicated.)
  • A BCM (Body Computer Module) needs to wake up periodically to monitor certain inputs. This is usually because multiple external events might try to wake up the ECU, but there might not be so many interrupt pins available.
  • I know of a system which used to maintain the time. The system would go to sleep and wake up every 1 sec to update its time variables. Only when the vehicle was on would the system display the time; otherwise it would go to low-power mode and wake up only every 1 sec.
As you can see, the CWT is quite useful. However, it might be a tough job to handle the CWT along with other wakeup sources which try to interfere with its operation.

Hope this clears my buddy's query...

Please leave your comment if you have one. You can subscribe to this blog by using the links under "Subscribe" section.

 




May 6, 2009

Let Us Model

I am not really an expert in this domain. I switched companies about 3 months back, and with that I also changed my working area to some extent. Now, instead of writing code, I model it.

Why does one need to model code?
The reason is that a model acts like a common language between the coder and the provider of requirements. However, that is not the only reason: most of the modeling languages these days provide a mechanism to directly convert the model into code, or partial code.

Simple cases:
  • UML modeling: These days many, if not all, commonly used high-level languages are object oriented. UML provides a very nice method to model the system in terms of classes, packages, their interdependencies etc. There are free and paid tools that can directly convert UML models to skeleton code.
  • Simulink modeling: Simulink is a very powerful tool available from MathWorks. The tool provides a simple GUI-based interface to create models. These models can be fed with inputs, and then the corresponding outputs can be tested for validity etc.
In the automotive domain, Matlab is currently used very often for verification of complicated algorithms. Once the Simulink models are tested extensively, it is possible to automatically generate floating-point code using RTW and Embedded Coder. This code can be directly flashed into controllers which have sufficient floating-point muscle power, to verify the functionality on the real hardware. However, more often than not, floating-point muscle power comes at a heavy price and is not preferred for production programs.

So what is the next step?

You got it right!! Convert the floating-point code into fixed-point code. The fixed-point code can run faster on simple µCs. Caution: note that all processors are capable of doing floating-point operations; however, in simpler microprocessors there is no dedicated hardware unit for doing this, which means it has to be done in software, which is time-consuming and memory-consuming. Some processors, like the PPC, are capable of doing this in hardware:
---------------------------------------------------------------------------------------------------------------------------
— Floating point
– IEEE® 754 compatible with software wrapper
– Single precision in hardware, double precision with software library
– Conversion instructions between single precision floating point and fixed point
---------------------------------------------------------------------------------------------------------------------------
Excerpt from a PPC manual.

Obviously we have to understand that, due to the limitations of fixed-point code, there will be resolution errors, also called quantisation errors. Based on how we choose our scaling (will talk about this later), we can keep the quantisation errors minimal. Of course, note that fixed-point code is not really all that fast, but it is faster than floating-point code (and slower than unscaled integer code!!).

To conclude: these days the system engineers quite often develop their algorithms as Simulink models, while the software developers take those Simulink models as inputs and create the fixed-point code that goes into the ECU.

Please leave your comment if you have one. You can subscribe to this blog by using the links under "Subscribe" section.






May 4, 2009

Where does it run ?

This post comes in the wake of some discussions I had a few days back with a few people working in the embedded domain.

The question was: where exactly does our code in the microcontroller run? The answer, again, is unfortunately not so straightforward, because different microcontrollers behave in different ways. Let us therefore try to understand how the whole process works, so that we can judge it better.

Frankly, a microcontroller at its core is just a bunch of registers on which a few mathematical operations can be done by another piece of hardware (part of the core) called the ALU (Arithmetic Logic Unit). Most micros thus have an Accumulator (Acc). This is like the mother register, and most of the operations (mathematical or logical) will use the Acc register (there are instructions that do not involve the Acc, e.g. mov H,L). For some instruction to be executed, it must first be fetched, then decoded, and only then can there be some action. To fetch an instruction, it should be readily available. "Readily" is a fuzzy word here, because what is ready on an 8 MHz system is really too slow on a 1 GHz processor. The readiness depends on how fast we can read the said instruction. The simplest microcontrollers, like the 8051, keep the instructions in their flash memories. These are accessed by the core and then executed. Some industry folks call this "executing from the flash".

Modern micros have the concept of a cache. This means they will pre-fetch and keep some of the instructions in a faster re-writeable memory. The core thus reads the instructions from the cache instead of from the flash. Cruder and simpler implementations of caching come in the form of instruction queues etc. These don't have the capabilities of a cache memory, but nevertheless help in making execution faster. So now the instructions are stored in flash, but executed really out of the cache memory.

Some microcontrollers also provide a wonderful feature wherein the instructions can be stored in the RAM. This is generally done while reprogramming (more about reprogramming in future posts). Here the real program resides in the flash memory, but one can copy the program to RAM and then set the core to fetch the instructions from the RAM instead of the flash.
Some folks call this "Execution from RAM".

The fact is that most modern micros can read instructions from flash, RAM, or in some cases even from communication buses (like CAN/FlexRay). The last part is a tricky one and will be discussed in some other post (because I have yet to get more information on that).

My take is this: all code gets executed in the core. All that varies is where we fetch it from.


Please leave your comment if you have one. You can subscribe to this blog by using the links under "Subscribe" section.








May 2, 2009

Thin is In

I am not putting up too much except this link. It talks about a new age of speakers which use thin films. This means that you can really get rid of those huge speaker boxes. As before, it will be some time before one can see these in production.

Wanna add your point or provide more info? Please leave your comment. You can subscribe to this blog by using the links under the "Subscribe" section.





April 28, 2009

500GB optical disc

This can make your movie-library storage and backup woes history. The new technology of micro-holographic discs promises to store up to 500 GB of data on a single disc, making it almost unbeatable in terms of storage capacity.

The disc unveiled by GE (General Electric) relies on volume storage rather than surface storage (I do not understand the former very well!!). Normal CDs and DVDs store the information on the surface of the disc via a set of 0's and 1's that get etched onto the surface (etched by use of a laser that changes the dye properties, creating what are known as pits). Using the presence and absence of pits, which can be determined by a lens-laser beam combination (based on the light reflected into the lens), the CD-ROM drive is able to read the data on the disc.

GE believes the technology will take a few years before it can really come out as an off-the-shelf product. However, when it does, I am sure it will change the way we store our data.


Wanna add your point or provide more info? Please leave your comment. You can subscribe to this blog by using the links under the "Subscribe" section.





April 24, 2009

Lets do a Ctrl Alt Del

I had written about interrupts in the post here. This post is specifically about a particular interrupt which in most cases is Non-Maskable. The word Non-Maskable tells us that, come what may, the µC software will not be able to avoid it. The special interrupt I am talking about is "Reset".
Is Reset an Interrupt?

It is technically an interrupt; however, some people feel that it is a microcontroller state. This is because when the µC is in Reset it really cannot do anything useful, since no code is executed. However, we treat it as an interrupt, perhaps for the following reasons:
  • There is a place for Reset in the Interrupt Vector Table (IVT).
  • There are different reasons for a reset to occur, and in some µCs all the reasons cause the code to branch to the above-mentioned vector.
What are the different types of Reset?

The most commonly known is the Power-On Reset (POR), as it is called in the embedded world. Note that just by applying power it is really not guaranteed that the microcontroller undergoes a RESET. It is a very common misunderstanding that if we just apply power for the first time, the micro will be undergoing a POR. For a µC to really undergo a proper reset, the RESET pin must be correctly handled. This is done via a reset circuit which looks like this. Note that this is for an 8051 micro, which for some reason has an active-high RST, compared to conventional microcontrollers where this pin is active low.
The other type, which happens very frequently if you are a bad programmer like me, is the running reset. A running reset can be caused by many different sources. These include the watchdog reset, illegal opcode reset, illegal memory access reset and finally the software reset. Each of these will be discussed in greater detail in coming posts. As of now it is sufficient to know that this occurs because of something bad our software caused, or in some cases because we intentionally did it (because we feel the best way is that way!!). In both cases the CPU starts from the Reset vector and starts executing code (except in higher-end 32-bit micros, where the Reset vector is not the only criterion!!). It is also useful to note that some µCs provide separate vectors for the WDT reset, illegal opcode reset etc. (the PowerPC from Freescale is a good example).
Reset handling is a tricky issue for the hardware designers, and not so much for the software guys. In coming posts, I will talk about some of the issues I have observed.

Wanna add your point or provide more info? Please leave your comment. You can subscribe to this blog by using the links under the "Subscribe" section.