Data Hiding in Image and Video
Part I: Fundamental Issues and Solutions

ECE 738 Class Presentation
By Tanaphol Thaipanich
[email protected]
Introduction

Data hiding / digital watermarking = schemes to embed secondary data in digital media
Applications: ownership protection, access control, authentication
Data hiding = a communication problem
  Embedded data = the signal to be transmitted
Introduction

Embedding capacity vs. robustness
  Distortion – must stay imperceptibly small for commercial or artistic reasons
Actual noise conditions
  Overestimate – waste capacity
  Underestimate – corruption of embedded bits
Uneven distribution of embedding capacity
  # of embeddable bits varies from location to location
Data Hiding Framework
Key Elements in Data Hiding System

Upper layers build on top to obtain additional functionalities

Three key elements:
(1) A mechanism for embedding one bit
(2) A perceptual model → imperceptibility
(3) Modulation/multiplexing techniques → multiple bits
Two Basic Embedding Mechanisms

Type I: Additive Embedding
  Adds the secondary data to the host signal: I₁ − I₀ = f(b)
  I₀ = the major noise source
  Knowledge of I₀ enhances detection performance
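A minimal sketch of Type-I embedding, assuming a spread-spectrum-style f(b) = b·α·w with a pseudo-random ±1 pattern w (the pattern, the strength α, and the seed are illustrative choices, not the slides'):

```python
import numpy as np

def additive_embed(host, bit, alpha=0.5, seed=0):
    """Type-I additive embedding: I1 = I0 + f(b).

    Assumed form f(b) = b * alpha * w, where b is the bit mapped to
    +/-1, w is a pseudo-random +/-1 pattern shared with the detector,
    and alpha trades distortion against robustness.
    """
    rng = np.random.default_rng(seed)
    w = rng.choice([-1.0, 1.0], size=host.shape)  # shared watermark pattern
    b = 1.0 if bit else -1.0                      # antipodal bit mapping
    return host + b * alpha * w

host = np.random.default_rng(1).normal(0.0, 10.0, size=1024)  # toy host signal
marked = additive_embed(host, bit=1)
```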
Two Basic Embedding Mechanisms

Type II: Relationship Enforcement Embedding
  Deterministically enforces the relationship b = g(I₁)
  Minimize perceptual distortion → keep I₁ close to I₀
  Needs no knowledge of I₀ – the information about b is carried in I₁ itself
Comparison for Type I & II

Capacity vs. robustness under "blind detection"
Simplified additive model (Type I)
  For simplicity, Mᵢ = i.i.d. N(0, σ_M²)
  Optimal detector (minimum probability of error)
  Detection statistic → normalized correlation
Comparison for Type I & II

T_N is Gaussian with unit variance and mean ±√(E_w/σ_M²), where E_w is the total watermark energy
Minimize the probability of error
  Raise the ratio of total watermark energy to noise power
  What should we do?
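A self-contained sketch of this detector under the model above (the length n, strength α, and σ_M are made-up values); the host and processing noise are lumped into one N(0, σ_M²) term, and the bit is read from the sign of T_N:

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha, sigma_m = 1024, 0.5, 4.0
w = rng.choice([-1.0, 1.0], size=n)   # watermark pattern known to the detector
b = 1                                 # embedded bit, mapped to +/-1

# Received signal: watermark plus i.i.d. N(0, sigma_M^2) noise (host + processing)
received = b * alpha * w + rng.normal(0.0, sigma_m, size=n)

# Normalized correlation: unit variance, mean +/- sqrt(E_w / sigma_M^2)
t_n = (received @ w) / (sigma_m * np.sqrt(w @ w))
decoded = 1 if t_n > 0 else -1
print(t_n, decoded)
```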
Comparison for Type I & II

Given the same noise power:
  A watermark with higher power → more distortion
  Using a longer signal per bit → lower embedding capacity
Comparison for Type I & II

Type II – no interference from the host media, and one bit is coded in a small number of host components → high capacity
  Example: odd–even embedding
Comparison for Type I & II

Robustness comes from the quantization or tolerance zone
  Large Q = more tolerance (−Q/2, Q/2)
Assume the host components within ±Q of kQ are uniformly distributed; the embedding error is then uniform over (−Q, Q), so MSE = Q²/3
  Large Q = larger distortion
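A sketch of odd–even embedding with this tolerance zone (the step Q and the sample values are illustrative):

```python
import numpy as np

def odd_even_embed(coeff, bit, q):
    """Quantize the host coefficient to the nearest multiple of q whose
    parity encodes the bit (even -> 0, odd -> 1), moving toward the
    original value to keep the distortion small."""
    k = int(np.round(coeff / q))
    if k % 2 != bit:
        k += 1 if coeff >= k * q else -1   # nearest multiple of the right parity
    return k * q

def odd_even_extract(coeff, q):
    """b = g(I1): the parity of the nearest multiple of q. Any noise
    within the (-q/2, q/2) tolerance zone leaves the bit intact."""
    return int(np.round(coeff / q)) % 2

q = 8.0
marked = odd_even_embed(13.2, bit=1, q=q)   # -> 8.0, an odd multiple of q
print(odd_even_extract(marked + 3.0, q))    # noise < q/2: still decodes 1
```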
Comparison for Type I & II

Type I: excellent robustness and invisibility when the original host is available. For blind detection, using a longer watermark per bit = more robust but less capacity
Type II: suits high-data-rate hiding applications that do not have to survive noise
Quantified Capacity Study

Type I embedding
  Channel model = CICO (continuous input, continuous output)
  Additive noise → host interference & processing noise, both i.i.d. Gaussian
  Shannon channel capacity: C = W log₂(1 + S/N)
    A² = power of the embedded signal
    σ_I² = power of the host signal
    σ² = power of the processing noise (σ_I² ≫ σ²)
    W = ½ (from MTF50)
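The capacity formula in code, with the host treated as part of the noise for blind detection (that reading of the noise term, and the example numbers, are assumptions):

```python
import numpy as np

def cico_capacity(a2, sigma_i2, sigma2, w=0.5):
    """C = W log2(1 + S/N) for Type-I embedding under blind detection.

    a2       : A^2, power of the embedded signal
    sigma_i2 : sigma_I^2, power of the host signal (acts as noise when
               the detector has no access to the original host)
    sigma2   : power of the processing noise (sigma_I^2 >> sigma2)
    w        : bandwidth term; the slides use W = 1/2 (MTF50)
    """
    return w * np.log2(1.0 + a2 / (sigma_i2 + sigma2))

# Host interference dominates, so blind Type-I capacity is small:
print(cico_capacity(a2=1.0, sigma_i2=100.0, sigma2=1.0))  # ~0.0071 bits/sample
```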
Quantified Capacity Study

Type II embedding
  Channel model = DIDO (discrete input, discrete output)
  C_DIDO = 1 − h_P, where h_P is the binary entropy of the bit error probability p:
    h_P = p·log₂(1/p) + (1−p)·log₂(1/(1−p))
Capacity Comparison for Type I & II

Problem setting – fix the MSE introduced by the embedding process to E², controlling perceptual quality
Type I: power of the embedded signal = E²
  Gaussian processing noise – σ²
  Host interference – σ_I = 10E
Type II: MSE = Q²/3 = E² → Q = √3·E
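A hedged numerical comparison under this setting. The Type II error model below – a bit flips when Gaussian noise leaves the ±Q/2 tolerance zone – is an assumption; the slides only fix the distortion budget:

```python
from math import erfc, log2, sqrt

E = 1.0                      # distortion budget: embedding MSE = E^2
sigma_i = 10.0 * E           # host interference for blind Type I
q = sqrt(3.0) * E            # Type II step: MSE = Q^2/3 = E^2  =>  Q = sqrt(3)*E

def type1_capacity(sigma):
    # C = 1/2 log2(1 + E^2 / (sigma_I^2 + sigma^2)), host treated as noise
    return 0.5 * log2(1.0 + E**2 / (sigma_i**2 + sigma**2))

def type2_capacity(sigma):
    # Assumed flip probability: P(|N(0, sigma^2)| > Q/2) = erfc(Q / (2*sigma*sqrt(2)))
    p = erfc(q / (2.0 * sigma * sqrt(2.0)))
    h = 0.0 if p <= 0.0 or p >= 1.0 else -p * log2(p) - (1 - p) * log2(1 - p)
    return 1.0 - h

for sigma in (0.1, 0.5, 1.0, 2.0):
    print(f"sigma={sigma}: Type I {type1_capacity(sigma):.4f}, "
          f"Type II {type2_capacity(sigma):.4f}")
```

With these numbers Type II approaches 1 bit per component at low noise while blind Type I stays near zero, matching the conclusion on the next slide.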
Capacity Comparison for Type I & II

Type I: suitable for strong noise conditions
Type II: useful under low noise conditions
Multi-level Embedding

Designing for a single target WNR (watermark-to-noise ratio):
  Actual noise is stronger → no extractable data
  Actual noise is weaker → wasted embedding capacity
Multi-level Embedding

Use two target WNR values
  A fraction α₁ of the embedded data survives a WNR of x₁, and all embedded data survives a higher WNR of x₂
Too many embedding levels → degradation
Handling Uneven Embedding Capacity

Unevenly distributed embedding capacity comes from the non-stationary nature of perceptual sources
  Ex) Changes made in smooth areas are easier to perceive than changes in textured areas
Goal: embed as many bits as possible in each region (highest capacity) → conveying the side information = large overhead → lower capacity
Handling Uneven Embedding Capacity

An easy way to overcome the high-overhead problem
  Embed a fixed number of bits in each region = no need for side information → CER (Constant Embedding Rate)
Need to ensure that
  The fixed number of bits is small
  The size of each region is large
Results in significant waste of embedding capacity
Does increasing the region's size really help?
Handling Uneven Embedding Capacity

Embeddable or unembeddable?
  Blockwise DCT transform (using 8×8 and 16×16 blocks)
  Compare the magnitude of each DC/AC coefficient with a perceptual threshold
  Smooth region = no embeddable coefficients
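A sketch of such an embeddability test (the single uniform threshold, the DC exclusion, and the values are simplifying assumptions; real perceptual models set a threshold per coefficient):

```python
import numpy as np
from scipy.fft import dctn

def embeddable_count(block, threshold=16.0):
    """Count AC coefficients of a block whose magnitude exceeds a
    perceptual threshold (one uniform threshold: an assumption)."""
    coeffs = dctn(block, norm="ortho")
    coeffs[0, 0] = 0.0                   # ignore the DC term here
    return int(np.sum(np.abs(coeffs) > threshold))

# A smooth block yields zero embeddable coefficients; a textured one many
flat = np.full((8, 8), 128.0)
textured = np.random.default_rng(0).uniform(0, 255, (8, 8))
print(embeddable_count(flat), embeddable_count(textured))
```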
Handling Uneven Embedding Capacity

20% of 8×8 blocks are smooth
15% of 16×16 blocks are smooth
Increasing the block size is ineffective in reducing the number of segments with zero embeddable coefficients
What should we do?
Backup Embedding

Embed the same data in multiple areas
  Backup embedding with L locations = increasing the block size by L times – why is it better?
Shuffling is a generalization of backup embedding
  Block size = 1, and the locations are specified by a permutation function
Quick Questions

Q1: Why is embedding capacity important?
Q2: MTF – Modulation Transfer Function
Q3: Shuffle?
Shuffling

Random or non-random permutation function → focus on the "complete random permutation"
  All permutations are equiprobable (probability 1/S! each)
Shuffling

Complete random permutation
  m_r/N = the fraction of segments having r embeddable coefficients
  Consider a simple scenario → throwing balls into holes
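A quick Monte Carlo of this balls-in-holes view (image size, segment size, and the fraction of embeddable coefficients are made-up parameters). After a uniformly random permutation, where the embeddable coefficients originally sat no longer matters, so we can scatter them at random and histogram the segments:

```python
import numpy as np

rng = np.random.default_rng(0)
S, seg_size = 512 * 512, 64        # total coefficients; 8x8 segments
N = S // seg_size                  # number of segments
frac = 0.15                        # assumed overall fraction of embeddable coeffs

# Mark the embeddable coefficients, then apply a complete random permutation
embeddable = np.zeros(S, dtype=bool)
embeddable[: int(frac * S)] = True
rng.shuffle(embeddable)            # all S! orderings equally likely

# m_r / N: fraction of segments holding exactly r embeddable coefficients
counts = embeddable.reshape(N, seg_size).sum(axis=1)
print("E[m_0/N] ~", np.mean(counts == 0))             # near zero here
print("typical r:", np.percentile(counts, [10, 90]))  # most segments get several
```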
Shuffling

Before: 20% of segments have no embeddable coefficient (case: 8×8)
After shuffling: E[m₀/N] = 0.002%, i.e., about 0.086 blocks per image
Most segments have 5–15 embeddable coefficients
Shuffling

Image with a high fraction of embeddable coefficients → use small segments = high capacity
Smooth image → use large segments to ensure no region has zero embeddable coefficients
What actually is shuffling?
  It reallocates embeddable coefficients from non-smooth regions to smooth regions
  Any drawback?
Practical Consideration

Shuffling = very low probability of getting a block with no embeddable coefficient, BUT there is still a chance
  Solution: use a primary and a secondary shuffle that differ significantly from each other
As discussed before, shuffling increases sensitivity, but this is acceptable for applications whose benefit comes from the hidden data
Variable Embedding Rate (VER)

Allows more data to be embedded when the average overhead is relatively small compared with the average embedding capacity per segment
Main issue = how to convey the side information
  Same or different embedding mechanism?
  Allocating more energy to the side information = more robust, but reduces capacity
  Q: Embedding the side information with the same embedding mechanism – a "key in a locked box" problem
Variable Embedding Rate (VER)

Part of the embedded data is predetermined
  Ex: The detector decodes the data using all candidate shuffles → the one that decodes accurately reveals the actual mechanism
Same idea for the segment size (to reduce complexity, use a primary/secondary size)
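A toy illustration of this trial-decoding idea (the marker bits, the candidate seeds, and the shuffle-only "decoder" are all hypothetical stand-ins for a real embedding chain):

```python
import numpy as np

MARKER = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # predetermined check bits (hypothetical)

def unshuffle(seed, bits):
    """Invert the candidate shuffle generated from `seed`."""
    perm = np.random.default_rng(seed).permutation(bits.size)
    out = np.empty_like(bits)
    out[perm] = bits                # shuffled[i] came from position perm[i]
    return out

# Embedder prepends the marker and shuffles with a secret seed (7 here)
payload = np.concatenate([MARKER, np.random.default_rng(3).integers(0, 2, size=24)])
perm = np.random.default_rng(7).permutation(payload.size)
shuffled = payload[perm]

# Detector tries every candidate shuffle and keeps the one whose
# predetermined bits decode correctly
for seed in (5, 6, 7, 8):
    candidate = unshuffle(seed, shuffled)
    if np.array_equal(candidate[:MARKER.size], MARKER):
        print("matched candidate shuffle:", seed)
```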
Thank you
Have questions?
Reach me at [email protected]
PPT – http://homepages.cae.wisc.edu/~thaipani/
MTF

MTF – spatial frequency response
  MTF applies only in the horizontal direction
MTF50, MTF10, MTF2 – cut-off points