Friday, February 14, 2014

HOW DO SD MEMORY CARDS FOR CAMERAS WORK (LARGE MEMORY)?




blondy2006


JUST CURIOUS ON HOW SD MEMORY CARDS WORK? IS IT THE MORE GB THE MORE IT HOLDS?? LIKE IS 8GB GOING TO HOLD MORE PICTURES THAN A 2GB?? I HAVE A SAMSUNG TL205 PLEASE HELP!!!


Answer
Memory cards are storage devices that hold the images and videos a camera records. They are semiconductor flash devices that use electronic components (transistors) to store data as a series of high and low voltages, referred to digitally as 1 and 0 in binary logic. Memory cards are random-access devices.

The higher the GB rating of a memory card, the more electronic storage cells it has, and therefore the higher its data/image/video capacity. A 1 or 0 as mentioned above is referred to as a bit (b). 8 bits together form a byte (B). 1024 bytes form what is called a kilobyte (KB), so 1 KB holds 1024 bytes, or 1024*8 = 8192 bits. Similarly, 1 MB holds a little over a million bytes: 1 MB = 1024*1024 B. In the same way, 1 GB = 1024*1024*1024 B = 1073741824 B, so 1 GB can store 1073741824 bytes, or 1073741824*8 = 8589934592 bits.
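
To make that arithmetic concrete, here's a tiny Python sketch of the same calculation (the card sizes are just examples):

# Bits and bytes in binary (base-1024) units
BITS_PER_BYTE = 8
KB = 1024           # bytes in a kilobyte
MB = 1024 * KB      # bytes in a megabyte
GB = 1024 * MB      # bytes in a gigabyte

print(GB)                      # 1073741824 bytes in 1 GB
print(GB * BITS_PER_BYTE)      # 8589934592 bits in 1 GB
print(8 * GB * BITS_PER_BYTE)  # an 8 GB card: 68719476736 bits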

In fundamental terms, a 1 GB memory card has over 8.5 billion storage locations to hold data as high or low voltages. Obviously an 8 GB card has a higher capacity than a 2 GB one, since it contains 4 times as many storage cells.

However, an SD card's GB rating alone doesn't determine the number of images/videos it can store. Image and video resolution and format dictate that too. The larger each image or video file is, the fewer of them the card can hold.
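
As a rough illustration of what that means in practice, here's a quick estimate in Python. The 4 MB average JPEG size is just an assumption for the sake of the example, not a spec of any particular camera:

# Rough photos-per-card estimate (4 MB average JPEG size is an assumption)
MB = 1024 * 1024
GB = 1024 * MB

avg_photo_bytes = 4 * MB
for card_gb in (2, 8):
    photos = (card_gb * GB) // avg_photo_bytes
    print(card_gb, "GB card holds roughly", photos, "such photos")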

The same SD card, when used in a lower-resolution camera (say 12.1 MP), will store more images than in a higher-resolution one (say 14.2 MP). This is because with increasing resolution the sensor captures more samples, so each pixel covers a smaller area of the scene and there are more pixels per photograph, and hence more digital data to store on the SD card. More pixels ultimately make for a clearer image. The same holds for videos: the scanning technique (interlaced or progressive) and the resolution hold the key there. A 720p video is of lower quality than a 1080p video, so the same SD card will hold longer 720p recordings than 1080p ones.
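
A small sketch of how the megapixel count feeds into raw data size (assuming 3 bytes per pixel before compression, which is just an illustrative figure):

# More megapixels -> more samples -> more raw data per shot
BYTES_PER_PIXEL = 3  # assumption: 24-bit colour, before any compression

for megapixels in (12.1, 14.2):
    pixels = megapixels * 1_000_000
    raw_mb = pixels * BYTES_PER_PIXEL / (1024 * 1024)
    print(megapixels, "MP camera ->", round(raw_mb, 1), "MB of raw data per image")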

Image and video size is also dictated by the formats used: JPG, MOV, MPEG, and so on. These are compression algorithms that squeeze the voluminous digital data captured for images and videos into compact files. Hence the need for codecs (to decompress) when opening a video! :) One format may compress the data more than another, increasing the number of images/videos per SD card, but heavier compression also degrades image/video quality. Panasonic cameras are generally said to take up more space per image than Canon/Nikon cameras.
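
To see how much the compression ratio matters, here's a hypothetical comparison; the ratios and the 36 MB raw size are made-up numbers for illustration, not measurements of any real format:

# How a format's compression ratio changes files-per-card (illustrative numbers only)
MB = 1024 ** 2
GB = 1024 ** 3
card_bytes = 8 * GB
raw_image_bytes = 36 * MB  # assumed uncompressed size of one photo

for fmt, ratio in (("light compression", 5), ("heavy compression", 15)):
    compressed = raw_image_bytes / ratio
    print(fmt, ": about", round(compressed / MB, 1), "MB per image,",
          int(card_bytes / compressed), "images per 8 GB card")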

Camera modes also dictate the size of each image/video and hence how many fit on an SD card. A 3D recording will take up more space than a normal 2D one!
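
As a toy example of that last point, assuming a stereoscopic 3D mode that simply stores a left view and a right view per frame (a simplification):

# Stereoscopic 3D stores two views per frame, roughly doubling the data
frame_2d_mb = 4                   # assumed size of one 2D frame/image
frame_3d_mb = 2 * frame_2d_mb     # left view + right view
print("2D:", frame_2d_mb, "MB per frame; 3D:", frame_3d_mb, "MB per frame")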

Perhaps an expert can add more on the subject; this was just a layman's introduction from my side. :)

How are video games projected on screen?




Lp182


I know that we aren't actually looking at a constant moving video game and that all we are seeing are multiple frames per second. But how is it done? I'm just curious how they do it. And are video games only projected at 30 frames and 60 frames per second, or are there other # of frames that can be shown in a given second?


Answer
The numbers 30 and 60 are a side effect of the NTSC television format. Even back in the black-and-white days of TV, NTSC was roughly a 640x480 image, interlaced (it draws all the odd lines in one pass, then all the even lines on the next pass), operating at 60 Hz. It wasn't until about 10 years ago that consumers started seeing any change to this, when EDTVs came on the scene and replaced interlacing with progressive scan (all lines are drawn sequentially on each pass), and then HDTVs changed the image resolution. Very recently, sets that can do 120 Hz have come onto the market. Europe used the PAL format, which ran at 50 Hz, but switched to 60 Hz for HDTV to simplify things for TV manufacturers (who had started to sell sets in Europe that could handle both 50 Hz and 60 Hz).

So for the longest time, TVs were displaying 60 half-frames per second (due to interlacing). Having a game run at 60 Hz therefore resulted in slightly smoother animation, but demanded a lot more processing power that could otherwise be used to enhance detail. So typically 60-fps games were smoother, while 30-fps games were much more detailed. At any other rate, the smoothness will vary, which is rather jarring and hurts immersion.
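
One way to see that trade-off is the per-frame time budget: at 60 fps the game has half as long to simulate and draw each frame as at 30 fps. A trivial Python calculation:

# Per-frame time budget at common console frame rates
for fps in (30, 60):
    budget_ms = 1000 / fps
    print(fps, "fps ->", round(budget_ms, 1), "ms to build each frame")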

All of this only applies to consoles, though, since PCs have always had monitors that could handle a variety of refresh rates, and therefore PC games have always strived for the highest framerate possible (with 60 Hz on the far low end) and don't worry much about the framerate dipping at times. High-end modern monitors can typically do 144 Hz or more, and some older games on new hardware can actually produce several hundred frames per second if certain settings are disabled.

Regardless of PC or console, the same programming technique is used to create the frames and manage sending them to the screen. Each object is placed in position in a virtual 3D space in RAM, textures are applied (only to surfaces facing the virtual camera), the view from the virtual camera is established, and the objects the camera can see are flattened into a 2D image. This whole process is called rendering.
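
Here's a heavily simplified sketch of that "flattening" step, using a toy pinhole-style perspective projection to turn 3D points into 2D screen coordinates. It's not how any real graphics API is written, just the idea:

# Toy perspective projection: flatten 3D points onto a 2D image plane
def project(point, focal_length=1.0):
    x, y, z = point
    if z <= 0:
        return None                 # behind the virtual camera, not drawn
    return (focal_length * x / z, focal_length * y / z)

objects = [(1.0, 0.5, 2.0), (-0.5, 1.0, 4.0), (0.0, 0.0, -1.0)]
for p in objects:
    print(p, "->", project(p))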

Then the rendered image is put into a piece of memory (usually on the graphics chip itself) that has been designated as a "frame". This frame is sent to the TV/monitor, and while it is being sent, a second image is rendered and put into a second "frame". Once the first frame has been sent to the screen completely, the second frame is designated as the primary frame, and a third image is rendered that overwrites the first frame. This process is commonly called frame (or double) buffering.
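
A minimal sketch of that swap, with plain Python bytearrays standing in for the two frames in graphics memory:

# Double/frame buffering: draw into the back frame while the front frame is shown
front = bytearray(640 * 480)        # frame currently being sent to the screen
back = bytearray(640 * 480)         # frame currently being rendered

def render_into(buffer, frame_number):
    buffer[0] = frame_number % 256  # stand-in for actually drawing pixels

for frame_number in range(3):
    render_into(back, frame_number)  # build the next image off-screen
    front, back = back, front        # swap: the finished frame becomes the one displayed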

PCs (and maybe HDMI TV connections, I'm not sure) allow the monitor to send a signal back to the game program telling it when a frame has finished being drawn. This lets the game avoid switching frames while one is still being sent to the screen (vertical sync), preventing the top and bottom portions of the displayed picture from coming from two different frames, an artifact referred to as "tearing". Since consoles know what frequency the TV operates at based on the region (or, more recently in Europe, through an option in the settings for either the game or the system), they can simply use an internal timer as an artificial vertical sync.
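
A rough sketch of that internal-timer idea for a 60 Hz region, simply sleeping out the rest of each 1/60-second slot before swapping (a simplification of what a console actually does):

import time

# Artificial vertical sync: pace frame swaps with a fixed 60 Hz timer
REFRESH_HZ = 60
FRAME_TIME = 1.0 / REFRESH_HZ

next_deadline = time.perf_counter()
for frame in range(5):
    # ... render the next frame into the back buffer here ...
    next_deadline += FRAME_TIME
    delay = next_deadline - time.perf_counter()
    if delay > 0:
        time.sleep(delay)            # wait until the display should be ready
    # swap buffers here, roughly in step with the TV's refresh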




Powered by Yahoo! Answers
