Complete Research Information in the University Data Repository

Research Title


CHARACTERIZE-BASED IMAGE COMPRESSION


Author / Editor / Publisher

 
توفيق عبد الخالق عباس الاسدي

Citation Information


توفيق عبد الخالق عباس الاسدي, "CHARACTERIZE-BASED IMAGE COMPRESSION", College of Information Technology, 5/10/2011 6:00:03 PM

Abstract


CHARACTERIZE-BASED IMAGE COMPRESSION

Full Abstract

Abstract
 
This paper presents a method that can be summarized in three stages: the first stage divides the image into a number of blocks based on a segmentation process; the second tests each block to determine the optimal compression method for it; and the third applies each method to its corresponding block. In this work we use the following compression methods: RLE, LZW, and BTC.
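The per-block selection described in the abstract can be sketched as follows. This is an illustrative assumption, not the paper's code: `zlib` stands in for the LZW-style dictionary coder, the RLE size is a simplified estimate, and BTC is omitted.

```python
# Sketch of the pipeline: segment into blocks (stage 1, assumed given here),
# test each candidate coder on each block (stage 2), and tag each block with
# the coder that compresses it best (stage 3 would then apply that coder).
import zlib

def rle_size(block):
    """Estimated RLE output size: each run costs a (symbol, length) pair."""
    runs = 1
    for a, b in zip(block, block[1:]):
        if a != b:
            runs += 1
    return runs * 2

def pick_coder(block):
    """Stage 2: choose the coder giving the smallest output for this block."""
    candidates = {
        "RLE": rle_size(block),
        "LZW-like": len(zlib.compress(bytes(block))),  # dictionary coder stand-in
    }
    return min(candidates, key=candidates.get)

# Stage 1 would produce these blocks via segmentation; hard-coded here.
blocks = [[0] * 64, list(range(64))]   # one flat block, one high-detail block
choices = [pick_coder(b) for b in blocks]
```

A flat block is dominated by a single run, so RLE wins there, while the high-detail block favors the dictionary coder; this is exactly the per-block adaptivity the method exploits.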
 


Introduction:
 
  
We can describe data compression as a method that takes input data D and generates a shorter representation c(D) with fewer bits than D. The reverse process, decompression, takes the compressed data c(D) and reconstructs data D'. The reconstructed D' may be identical to the original D or an approximation of it, depending on the reconstruction requirements. If D' is an exact replica of D, the algorithms applied to compress D and decompress c(D) are called lossless; otherwise they are lossy. Hence, as far as reversibility of the original data is concerned, data compression algorithms can be broadly classified into two categories: lossless and lossy.

Image data comprise a significant portion of multimedia data and occupy the lion's share of the communication bandwidth in multimedia communication. As a result, the development of efficient image compression techniques continues to be an important challenge, both in academia and in industry. Data compression is the technique of reducing redundancies in data representation in order to decrease storage requirements and hence communication costs. Reducing the storage requirement is equivalent to increasing the capacity of the storage medium and hence the communication bandwidth. Thus the development of efficient compression techniques will continue to be a design challenge for future communication systems and advanced multimedia applications. Because of the reduced data rate offered by compression techniques, computer network and Internet usage is becoming more and more image- and graphics-friendly, rather than being a purely data- and text-centric phenomenon.
In short, high-performance compression has created new opportunities for creative applications such as digital libraries, digital archiving, video teleconferencing, telemedicine, and digital entertainment, to name a few.

There are many other secondary advantages to data compression. For example, it has great implications for database access: compression may enhance database performance because more compressed records can be packed into a given buffer space in a traditional computer implementation. Data security can also be greatly enhanced by encrypting the decoding parameters and transmitting them separately from the compressed database files, restricting access to proprietary information. An extra level of security can be achieved by making the compression and decompression processes totally transparent to unauthorized users.

On the other hand, data compression generally reduces the reliability of data records. For example, a single bit error in compressed code can cause the decoder to misinterpret all subsequent bits, producing incorrect data. Transmission of very sensitive compressed data (e.g., medical information) through a noisy communication channel (such as wireless media) is risky, because the burst errors introduced by the noisy channel can destroy the transmitted data. Another problem of data compression is the disruption of data properties, since the compressed data differ from the original data.

Data compression schemes can be static or dynamic. In static methods, the mapping from a set of messages (data or signal) to the corresponding set of compressed codes is always fixed. In dynamic methods, the mapping from the set of messages to the set of compressed codes changes over time [1]. Techniques such as Shannon-Fano coding [2] and Huffman coding [1] use a redundancy-reduction mechanism that results in shorter codes for more frequently appearing samples.
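The redundancy-reduction idea behind Huffman coding can be sketched as follows, assuming a simple frequency-counting pass over the data. This is an illustrative sketch, not the implementation of the cited references.

```python
# Minimal Huffman-coding sketch: symbols that appear more frequently are
# assigned shorter prefix-free bit codes.
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a prefix-code table mapping each symbol to a bit string."""
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie_breaker, {symbol: code_so_far}).
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)      # two least frequent subtrees
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (n1 + n2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
# The most frequent symbol 'a' gets a code no longer than 'b' or 'c'.
assert len(codes["a"]) <= len(codes["b"]) <= len(codes["c"])
```

Note that, as the next paragraph observes, building such a code requires a scan over the data to estimate symbol probabilities first.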
It is necessary to scan the data samples in order to determine their probabilities of occurrence and create an appropriate code. RLE [3] is another redundancy-reduction coding method in which, within a scan line, each run of symbols is coded as a pair that specifies the symbol and the length of the run. Transform coding (including Cosine/Sine [4], Fourier [2], Hadamard [2], Wavelet [5], Slant [6], and Principal-Component [1]), subband coding [2], vector quantization [2], and predictive coding [1] are among the methods that have achieved high levels of lossy compression. LZW [8] matches the string of symbols against phrases in a dictionary of already-occurred sequences [7].
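The scan-line run-length coding described above can be sketched as follows; the pair representation and function names are illustrative assumptions, not the paper's code.

```python
# Minimal RLE sketch: each run in a scan line is stored as a
# (symbol, run_length) pair, and decoding reverses the mapping exactly,
# so the scheme is lossless (D' is identical to D).

def rle_encode(data):
    """Encode a sequence as a list of (symbol, run_length) pairs."""
    runs = []
    for symbol in data:
        if runs and runs[-1][0] == symbol:
            runs[-1][1] += 1           # extend the current run
        else:
            runs.append([symbol, 1])   # start a new run
    return [(s, n) for s, n in runs]

def rle_decode(runs):
    """Reconstruct the original sequence from (symbol, run_length) pairs."""
    out = []
    for symbol, length in runs:
        out.extend([symbol] * length)
    return out

scanline = [255, 255, 255, 0, 0, 7]
encoded = rle_encode(scanline)           # [(255, 3), (0, 2), (7, 1)]
assert rle_decode(encoded) == scanline   # lossless round trip
```

RLE pays off only when runs are long, which is why the method in this paper applies it selectively to blocks where it is the best performer.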

Download Attached File (Paper Link on Babylon University Network Server)
