/*
Videocrypt Cryptanalysis
------------------------
Markus Kuhn, University of Erlangen -- 1994-07-11
The files in this directory are TV pictures that have been received
from the ASTRA 1A satellite, transponder 8, station "Sky One". These
pictures have all been taken from a Star Trek TNG episode. The
original files are:
d-raw.ppm A picture without encryption showing "Deanna Troi"
d-vc1.ppm The same scene (~2 frames later) with encryption
e-vc1.ppm An encrypted starship in space
r-raw.ppm Commander Riker on the bridge, not encrypted
r-vc1.ppm Same picture (~2 frames earlier) encrypted
r-vc2.ppm Same picture (~4 frames earlier) encrypted
Some of these files are available here in JPEG-compressed format. Use
the djpeg program to decompress them into PPM. For other file formats,
try a suitable conversion tool.
The images contain a PAL half frame (lines doubled) in an RGB color
space and were converted with a frame grabber (Parallax XVideo Board)
on a Sun SPARCstation 10.
The grayscale file
r-vc1.decrypted.pgm
demonstrates the results of a prototype Videocrypt cryptanalysis
algorithm which I have developed. The only input to this algorithm was
the file r-vc1.ppm when this result was produced, NO information from
SmartCards or Videocrypt clone chips/cards was used. The algorithm is
still under development. Suggestions for improved algorithms are
welcome!
The file antisky.c contains the ANSI C source of the program that I
used to decrypt the image. E.g.
antisky -1 -r20 r-vc1.ppm r-vc1.decrypted.pgm
produced the example file (~7 seconds on a Sun SPARCstation 10). Try
also
antisky -1 -r20 -bc r-vc1.ppm r-vc1.decrypted.pgm
in order to see how the image looks after the first algorithm. The
additional line you'll see (option -c) is the edge formed by the
left/right border of the image which is detected by the second
algorithm. The first algorithm (cross-correlation) matches two
consecutive lines and shifts the lower line so that it matches the
next higher one. The second algorithm searches for the border so that
the drift of the first algorithm and the total offset (which is
inherited from the first line) can be corrected.
Some theory:
Videocrypt rotates individual lines; in other words, every line is
cut at a secret point into two parts, and then the two parts are
exchanged. For example, if an original line in the pictures was
0123456789
(each digit represents one pixel), then the rotated version (here with
offset 3) looks like
7890123456
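The cut-and-rotate operation described above can be sketched as follows (a hypothetical helper for illustration, not part of antisky.c; `offset` counts the pixels moved from the tail of the line to the front):

```c
#include <string.h>

// Hypothetical helper (not part of antisky.c): perform the Videocrypt
// cut-and-rotate on a line of n pixels.  `offset` is the number of
// pixels cut from the right end of the line and moved to the front.
void rotate_line(const char *in, char *out, int n, int offset)
{
    memcpy(out, in + n - offset, offset);   // tail of the line first
    memcpy(out + offset, in, n - offset);   // then the head
}
```

With n = 10 and offset = 3, "0123456789" becomes "7890123456", matching the example above.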
What the first step of the ANTISKY algorithm does is simply to
compare this rotated line in all 10 possible offsets
7890123456
6789012345
5678901234
...
9012345678
8901234567
with the previous line. The measure of how well the line matches the
previous one at a particular offset is the sum of the products of
pixels in the same column. In the output picture, consecutive lines
are rotated relative to each other so that this measure is maximized.
The first line is not touched.
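The brute-force version of this first step can be sketched as follows (illustration only; the real program computes the same result much faster with an FFT-based cross-correlation, see below):

```c
// Brute-force version of the first ANTISKY step (illustration only;
// the real program computes the same thing with an FFT-based
// cross-correlation): try all n circular offsets of the current line
// against the previous line and return the offset that maximizes the
// sum of products of pixels in the same column.
int best_offset(const float *prev, const float *cur, int n)
{
    int shift, i, best = 0;
    float sum, maxsum = -1e30f;

    for (shift = 0; shift < n; shift++) {
        sum = 0.0f;
        for (i = 0; i < n; i++)
            sum += prev[i] * cur[(i + shift) % n];
        if (sum > maxsum) {
            maxsum = sum;
            best = shift;
        }
    }
    return best;
}
```

This is O(n^2) per line pair, which is exactly why the FFT approach described next matters.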
The general idea is quite simple; the question is how to implement
it as efficiently as possible. The technical details of the
implementation are a little bit tricky and you won't understand them
without some mathematical background (fast Fourier transform, real
FFT, cross-correlation, convolution -> simple multiplication in the
frequency domain, zero padding, etc.). If you want to learn how all
this works, then definitely read chapters 12.0-12.3 and 13.0-13.2
in the book
William H. Press, Saul A. Teukolsky, William T. Vetterling:
Numerical Recipes in C : The Art of Scientific Computing.
Second edition, Cambridge University Press, 1993,
ISBN 0521431085.
or the chapters about FFT, convolution, and cross-correlation in any
introductory book on digital signal processing. It is really worth
investing some time in studying this. I believe the Fast Fourier
Transform (FFT) is one of the coolest and most useful algorithms ever
invented.
Some care has to be taken before this algorithm is applied, because
frames received from a frame grabber often look like this:
@@789012345678@ (@ is a black pixel)
i.e. they have additional black borders, and the left and right ends
of the line might overlap. Before the ANTISKY algorithm can produce
good results, you have to cut off the black and doubled pixels (e.g. 3
from the left and 2 from the right with options -l3 -r2). If you don't
remove these additional pixels, the corrected line will look like
012345678@@@789
i.e. you get short black lines in the middle of the decoded image.
After the first step, the image has 2 defects (which you can see with
option -c):
- the overall offset is the (unknown) offset of the first line
- there is sometimes an error of between 0 and ~3 pixels in the
offset, so the whole picture slowly drifts to the left or right.
In order to compensate for these defects, the second step of the ANTISKY
algorithm searches the edge formed by the left and right border of the
image which is not visible in a correctly aligned image. The lines are
then rotated such that this edge vanishes. The edge detector algorithm
uses a dynamic programming technique, that is, it calculates the
cheapest path from the top line down to any pixel based on the
cheapest path to the pixels in the line above. A path is "cheaper"
here if it goes along a high-contrast edge and if it does not deviate
too much from a vertical line (see the code for the exact cost
function used; I do not claim this one is optimal in any sense). If
you have no idea what "dynamic programming" graph search algorithms
are, then first read the relevant chapter in
Cormen/Leiserson/Rivest: "Introduction to Algorithms", MIT Press,
1991, ISBN 0-262-03141-8.
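One row of such a dynamic-programming search can be sketched as follows (a simplified, non-circular sketch; antisky.c wraps around the line ends and uses its own cost function):

```c
#include <stdlib.h>

// Sketch of one row of the dynamic-programming edge search: for each
// column i, extend the best path that ended in the previous row at a
// nearby column i+d (|d| <= maxdrift), charge a penalty for drifting
// away from the vertical, and add this row's edge-contrast bonus.
// dir[i] records the step taken, so the best path can be traced back
// afterwards.
void dp_row(const int *prev_val, int *val, signed char *dir,
            const int *bonus, int n, int maxdrift, int penalty)
{
    int i, d, j, v;

    for (i = 0; i < n; i++) {
        val[i] = -1000000;
        for (d = -maxdrift; d <= maxdrift; d++) {
            j = i + d;
            if (j < 0 || j >= n)
                continue;
            v = prev_val[j] - penalty * abs(d);  // vertical paths are cheaper
            if (v > val[i]) {
                val[i] = v;
                dir[i] = (signed char) d;
            }
        }
        val[i] += bonus[i];  // reward high-contrast edges in this row
    }
}
```

Calling this once per image line and then tracing dir[] back from the best final column yields the cheapest top-to-bottom path, just as the two-part edge search in the code below does.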
It is not easy to find the correct edge that represents the image
margins, because some pictures contain many edges running nearly
vertically across the screen, and some of them often have better
contrast than the left/right margin. But fortunately, a special
property of the Videocrypt system allows the correct edge to be found
quite reliably. The following idea has been implemented since antisky
0.92. If you use option -m in order to make the cutting points in the
image visible, you'll discover that cutting points never appear nearer
than a certain minimum distance to the image border; this distance is
about 12% of the image width. So the edge detection algorithm has to
avoid getting nearer than this distance to any cutting point, which
excludes many of the "alternative edges" that have confused previous
releases of the algorithm. You can make these deadzones around cutting
points visible with option -d.
Option -f allows you to reduce the horizontal resolution of the
algorithm to 1/2. This doubles the speed, but reduces image
quality. You can apply -f several times, e.g. -fff reduces the
resolution to 1/8 (which already causes severe distortions). Another
method of increasing the speed dramatically is to make the source
image smaller. An image that is obtained from the source image by
taking only every 4th pixel in the horizontal and vertical direction
(i.e. one that has only 1/16 of the original number of pixels) boosts
the decoder's speed so that real-time TV is possible on good
workstation clusters (e.g. our fastest HP system here needs only
0.2 s for such a frame, so with 10 workstations you get nearly
50 Hz :-).
TODO:
- Faster cross-correlation. This consumes nearly all the computing
time at the moment, and the FFT approach has been quite well
optimized since antisky 0.9. Anyone with better ideas (hierarchical
matching, DSP, FFT processors, ...)?
- Better edge detection (I have some ideas, but have not yet tried
them). A lot of experimentation has to be done here.
- Write drivers for the real time video recording harddisk cluster
of the local computer graphics department. It might be used to
decode a whole film during a night on a workstation cluster and
put it back on video tape.
- Color. The current software already supports it, but this is
useless with our frame grabber (and every frame grabber I've seen
so far). Problem: PAL decoders average the color vectors of
consecutive lines, and if those lines don't have similar colors,
you'll get a random gray. Modified frame grabber hardware that doesn't
implement the PAL standard color correction would be necessary!
Experimental code for pure software PAL color correction is in the
software since version 0.9, but it didn't work and thus has been
deactivated with an #if 0. If you want to play around with the
idea of doing the entire PAL decoding in software, you'll find
suitable code written by William Andrew Steer. [That web page
vanished in 2002, but an archival copy can still be found.]
A few more words about the color problem with Videocrypt, PAL and your
frame grabber:
The RGB video signal from the camera source is separated by the PAL
system into a luminance and a color (chrominance) part. Unfortunately,
every standard PAL frame grabber is confused by the Videocrypt line
rotation and destroys the chrominance part. One tricky detail of the
PAL system is that the chrominance parts of two consecutive lines are
averaged in order to avoid the phase-color problems (green faces,
etc.) that appear in NTSC. But in Videocrypt, pairs of lines don't fit
together,
so the averaged colors in the RGB image you get from your frame
grabber are always very chaotic. In order to allow antisky to compare
the lines correctly, only the part of the video signal which has
survived the frame grabber's PAL decoder, i.e. the luminance signal,
may be used. The luminance signal is what you see on an old
black/white TV set. According to the CCIR PAL standard, the luminance
Y of an RGB pixel is
Y = R * 0.299 + G * 0.587 + B * 0.114.
The grayscale image that you get with these Y values should now be
undisturbed if your frame grabber strictly conforms to the PAL
standard. If your frame grabber allows you to store the image in YUV
format, then you should put only the Y component into a PGM file for
antisky, as the frame grabber has already separated the luminance Y
and the chrominance (U,V) for you. If you give PPM RGB files to
antisky, the program will automatically extract the Y signal using the
above formula.
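The Y extraction can be sketched as a small helper (illustration of the CCIR formula above; rounding to the nearest integer is an assumption here, antisky's own PGM output uses integer weights):

```c
// Compute the CCIR luminance Y of an 8-bit RGB pixel, rounded to the
// nearest integer -- the same formula antisky applies to PPM input.
int rgb_to_y(int r, int g, int b)
{
    double y = 0.299 * r + 0.587 * g + 0.114 * b;
    return (int)(y + 0.5);
}
```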
Markus
http://www.cl.cam.ac.uk/~mgk25/tv-crypt/image-processing/
*/
/*
* ANTISKY -- Videocrypt cryptanalysis algorithm
*
* This program reconstructs TV images that have been encrypted
* by pay-TV broadcasting stations that use the Videocrypt encryption
* system. The algorithm uses only the statistical properties of normal
* TV pictures and requires NO secret information that is stored
* in the Videocrypt SmartCards or their illegal clones.
*
* This software is only provided for scientific demonstration
* purposes as an example of an interesting signal processing
* algorithm. The author rejects any responsibility for pay-TV
* piracy abuses of this software.
*
* Markus Kuhn
*
* $Id: antisky.c,v 1.5 1994/02/20 15:14:01 mskuhn Exp $
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include <math.h>
#ifndef SEEK_CUR
#define SEEK_CUR 1
#endif
#define USE_FFT
typedef struct {
FILE *f;
enum {PGM_raw, PPM_raw} type;
int xmax, ymax;
} G_FILE;
typedef unsigned char RGB_line[3];
float *twr[2], *twi[2]; /* base vectors for FFT */
char version[] = "ANTISKY 0.92 -- Videocrypt cryptanalysis algorithm"
" -- Markus Kuhn\n";
char usage[] = "%s\n%s [options] [<infile> | -] [<outfile>]\n"
"\nThe encrypted file is a single frame of a video sequence that has been\n"
"encrypted using the Videocrypt system. The output file will be a b/w\n"
"PGM file with the decrypted image. If no filenames are given, stdin and\n"
"stdout are used.\n\ncommand line options:\n\n"
" -r The number of pixel columns that should be\n"
" removed from the right margin of the source\n"
" image. It is very important that this parameter\n"
" or -l is adjusted correctly, because the source\n"
" image might contain a few pixels doubled on\n"
" both sides of the image which will after\n"
" decoding be inserted somewhere in the middle\n"
" of the image and cause matching problems.\n"
" -l Number of pixel columns to be removed from the\n"
" left margin before decoding starts. If your\n"
" frame grabber has cut off too many pixels at\n"
" the borders, try negative values here.\n"
" -v Verbose. Print some additional information.\n"
" -b Mark border found by edge detection algorithm.\n"
" Best applyied together with -c.\n"
" -m Mark the positions of the cutting points in the\n"
" encoded image (only for demonstration).\n"
" -d Mark the deadzones near cutting points where\n"
" no image border can be (only for demonstration).\n"
" -c Do only maximize the interline cross-correlation\n"
" (algorithm 1) and do not search the image margin\n"
" and correct it. (Only for demonstration of\n"
" how the two steps of the process work.)\n"
" -o Use the input format instead of PGM for output\n"
" (currently also supported: PPM).\n"
" -f Reduce the horizontal resolution to 50%%. This\n"
" makes the program faster, but reduces quality.\n"
" -0 Process all lines (default).\n"
" -1 Process only odd lines and double them in\n"
" the output file.\n"
" -2 Process only even lines and double them.\n"
"\n";
void *checkedmalloc(size_t n) {
void *p;
if ((p = malloc(n)) == NULL) {
fprintf(stderr, "Sorry, not enough memory available!\n");
exit(1);
}
return p;
}
/* generate vector tables for FFT */
void init_fft(int nn)
{
int n, mmax, m, istep;
double wtemp, wpr, wpi, theta;
int c, v, isign;
n=nn << 1;
c = 0;
mmax = 2;
while (n > mmax) {
istep = mmax << 1;
c += mmax / 2 + 1;
mmax = istep;
}
for (v = 0; v <= 1; v++) {
twr[v] = (float *) checkedmalloc(c * sizeof(float));
twi[v] = (float *) checkedmalloc(c * sizeof(float));
isign = v ? -1 : 1;
c = 0;
mmax = 2;
while (n > mmax) {
istep = mmax << 1;
theta = isign * (6.28318530717959 / mmax);
wtemp = sin(0.5 * theta);
wpr = -2.0 * wtemp * wtemp;
wpi = sin(theta);
twr[v][c]=1.0;
twi[v][c]=0.0;
c++;
for (m = 1; m < mmax; m += 2) {
twr[v][c] = twr[v][c-1] * wpr - twi[v][c-1] * wpi + twr[v][c-1];
twi[v][c] = twi[v][c-1] * wpr + twr[v][c-1] * wpi + twi[v][c-1];
c++;
}
mmax = istep;
}
}
return;
}
/* 1-D real FFT implementation */
#define SWAP(a,b) tempr=(a);(a)=(b);(b)=tempr
void fft(float data[], int nn, int isign)
{
int n, mmax, m, j, istep, i;
float wr, wi;
float tempr, tempi;
float *ptwr, *ptwi;
n = nn << 1;
j = 1;
for (i = 1;i < n;i += 2) {
if (j > i) {
SWAP(data[j], data[i]);
SWAP(data[j+1], data[i+1]);
}
m = n >> 1;
while (m >= 2 && j > m) {
j -= m;
m >>= 1;
}
j += m;
}
ptwr = twr[isign < 0];
ptwi = twi[isign < 0];
mmax = 2;
while (n > mmax) {
istep = mmax << 1;
wr = *(ptwr++);
wi = *(ptwi++);
for (m = 1; m < mmax; m += 2) {
for (i = m; i <= n; i += istep) {
j = i + mmax;
tempr = wr*data[j] - wi * data[j+1];
tempi = wr*data[j+1] + wi * data[j];
data[j] = data[i] - tempr;
data[j+1] = data[i+1] - tempi;
data[i] += tempr;
data[i+1] += tempi;
}
wr = *(ptwr++);
wi = *(ptwi++);
}
mmax = istep;
}
}
#undef SWAP
void real_fft(float data[], int n, int isign)
{
int i, i1, i2, i3, i4, np3;
float c1=0.5, c2, h1r, h1i, h2r, h2i;
double wr, wi, wpr, wpi, wtemp, theta;
theta = 3.141592653589793 / (double) (n>>1);
if (isign == 1) {
c2 = -0.5;
fft(data, n>>1, 1);
} else {
c2 = 0.5;
theta = -theta;
}
wtemp = sin(0.5 * theta);
wpr = -2.0 * wtemp * wtemp;
wpi = sin(theta);
wr = 1.0 + wpr;
wi = wpi;
np3 = n+3;
for (i = 2;i <= (n>>2); i++) {
i4 = 1 + (i3 = np3 - (i2 = 1 + (i1 = i + i - 1)));
h1r = c1 * (data[i1] + data[i3]);
h1i = c1 * (data[i2] - data[i4]);
h2r = -c2 * (data[i2] + data[i4]);
h2i = c2 * (data[i1] - data[i3]);
data[i1] = h1r + wr * h2r - wi * h2i;
data[i2] = h1i + wr * h2i + wi * h2r;
data[i3] = h1r - wr * h2r + wi * h2i;
data[i4] = -h1i + wr * h2i + wi * h2r;
wr = (wtemp = wr) * wpr - wi * wpi + wr;
wi = wi * wpr + wtemp * wpi + wi;
}
if (isign == 1) {
data[1] = (h1r = data[1]) + data[2];
data[2] = h1r - data[2];
} else {
data[1] = c1 * ((h1r = data[1]) + data[2]);
data[2] = c1 * (h1r - data[2]);
fft(data, n>>1, -1);
}
}
/* graphic file handling routines */
/* skip whitespace and comments in PGM/PPM headers */
void skip_white(FILE *f)
{
int c;
do {
while (isspace(c = getc(f)));
if (c == '#')
while ((c = getc(f)) != '\n' && c != EOF);
else {
ungetc(c, f);
return;
}
} while (c != EOF);
return;
}
/* read header of input file */
int g_openr(G_FILE *gf)
{
char magic[10];
int vmax;
if (gf->f == NULL) abort();
magic[0] = fgetc(gf->f);
magic[1] = fgetc(gf->f);
if (magic[0] == 'P' && isdigit(magic[1])) {
/* PGM or PPM */
fgets(magic + 2, sizeof(magic) - 2, gf->f);
if (!strcmp(magic, "P5\n")) gf->type = PGM_raw;
else if (!strcmp(magic, "P6\n")) gf->type = PPM_raw;
else {
fprintf(stderr, "Unsupported input file type!\n");
return(1);
}
skip_white(gf->f);
fscanf(gf->f, "%d", &gf->xmax);
skip_white(gf->f);
fscanf(gf->f, "%d", &gf->ymax);
skip_white(gf->f);
fscanf(gf->f, "%d", &vmax);
getc(gf->f);
} else {
fprintf(stderr, "Unsupported input file type!\n");
return(1);
}
return 0;
}
/* write header of output file */
void g_openw(G_FILE *gf)
{
switch (gf->type) {
case PGM_raw:
fprintf(gf->f, "P5\n# %s%d %d\n%d\n", version,
gf->xmax, gf->ymax, 255);
break;
case PPM_raw:
fprintf(gf->f, "P6\n# ANTISKY -- Markus Kuhn\n%d %d\n%d\n",
gf->xmax, gf->ymax, 255);
break;
default:
abort();
}
return;
}
/* read a single image line and store it as RGB values */
void g_read_rgb(RGB_line *line, G_FILE *gf)
{
int i;
switch (gf->type) {
case PPM_raw:
if (line)
fread(line, 3, gf->xmax, gf->f);
else
for (i = 0; i < 3 * gf->xmax; i++)
getc(gf->f);
break;
case PGM_raw:
if (line)
for (i = 0; i < gf->xmax; i++)
line[i][0] = line[i][1] = line[i][2] = getc(gf->f);
else
for (i = 0; i < gf->xmax; i++)
getc(gf->f);
break;
default:
abort();
}
return;
}
/* write a single line of RGB values to a file */
void g_write_rgb(RGB_line *line, int offset, G_FILE *gf)
{
int i, v;
RGB_line *p;
switch (gf->type) {
case PPM_raw:
fwrite(line + offset, 3, gf->xmax - offset, gf->f);
fwrite(line, 3, offset, gf->f);
break;
case PGM_raw:
p = line + offset;
for (i = 0; i < gf->xmax; i++) {
v = (3 * (int) (*p)[0] +
6 * (int) (*p)[1] +
1 * (int) (*p)[2]) / 10;
if (v < 0) v = 0;
if (v > 255) v = 255;
putc(v, gf->f);
if (++p == (line + gf->xmax)) p = line;
}
break;
default:
abort();
}
return;
}
int main(int argc, char **argv)
{
char *fnin = NULL, *fnout = NULL;
int line, lines, linecount = 0, i, j, k, x, fftsize, lumres;
RGB_line **rgb;
float *lum[2]; /* luminance signal of this and previous line */
float *f[2]; /* fourier transformed lum */
float *cc; /* cross correlation between this and previous line */
#ifndef USE_FFT
float *p, sum;
#else
float dum;
#endif
float maxsum = 0;
int last = 1, new = 0;
G_FILE gin, gout;
int *offset, *hist;
signed char **dir; /* direction vectors for dynamic programming alg. */
int *val[2]; /* bonus values for dyn. prog. algorithm */
int mindist; /* this is deadzone in lumres pixels */
/* parameters */
int from_stdin = 0;
int rmargin = 0, lmargin = 0;
int halfframe = 0; /* 1: only first, 2: only second, 0: both halfframes */
int verbose = 0;
int markbreak = 0;
int mark_edge = 0;
int corr_edge = 1;
int max_edge_bias = 4;
int orig_fmt = 0;
int resolution = 1;
int show_deadzone = 0;
float deadzone = 0.12; /* this multiplied with the image width is the
* minimum distance between the real image's
* left/right border and any cutting-point */
/* read command line arguments */
gin.f = stdin;
gout.f = stdout;
for (i = 1; i < argc; i++) {
if (argv[i][0] == '/' && argv[i][1] == '?' && argv[i][2] == 0) {
fprintf(stderr, usage, version, argv[0]);
exit(1);
} else if (argv[i][0] == '-')
if (argv[i][1] == '\0') {
if (from_stdin) {
fprintf(stderr, usage, version, argv[0]);
exit(1);
}
from_stdin = 1;
} else
for (j = 1; j < 2000 && argv[i][j] != 0; j++)
switch (argv[i][j]) {
case 'r':
case 'R':
if (argv[i][j+1] != '\0') {
rmargin = atoi(argv[i] + j + 1);
j = 9999;
} else {
fprintf(stderr, usage, version, argv[0]);
exit(1);
}
break;
case 'l':
case 'L':
if (argv[i][j+1] != '\0') {
lmargin = atoi(argv[i] + j + 1);
j = 9999;
} else {
fprintf(stderr, usage, version, argv[0]);
exit(1);
}
break;
case 'b':
case 'B':
mark_edge = 1;
break;
case 'm':
case 'M':
markbreak = 1;
break;
case 'd':
case 'D':
show_deadzone = 1;
break;
case 'c':
case 'C':
corr_edge = 0;
break;
case 'o':
case 'O':
orig_fmt = 1;
break;
case 'f':
case 'F':
resolution *= 2;
break;
case '0':
case '1':
case '2':
halfframe = argv[i][j] - '0';
break;
case 'v':
case 'V':
fprintf(stderr, "%s\n", version);
verbose = 1;
break;
default:
fprintf(stderr, usage, version, argv[0]);
exit(1);
}
else if (gin.f == stdin && !from_stdin) {
fnin = argv[i];
gin.f = fopen(fnin, "rb");
if (gin.f == NULL) {
fprintf(stderr, "Can't open input file '%s", fnin);
perror("'");
exit(1);
}
} else if (gout.f == stdout) {
fnout = argv[i];
gout.f = fopen(fnout, "wb");
if (gout.f == NULL) {
fprintf(stderr, "Can't open output file '%s", fnout);
perror("'");
exit(1);
}
} else {
fprintf(stderr, usage, version, argv[0]);
exit(1);
}
}
if (g_openr(&gin)) {
if (fnin == NULL)
fprintf(stderr, "Can't read file from stdin!\n");
else
fprintf(stderr, "Can't read input file '%s'!\n", fnin);
exit(1);
}
gout.xmax = gin.xmax - lmargin - rmargin;
gout.ymax = gin.ymax;
gout.type = orig_fmt ? gin.type : PGM_raw;
lines = gin.ymax;
if (halfframe) {
lines = gin.ymax / 2;
gout.ymax = 2 * lines;
}
g_openw(&gout);
lumres = gout.xmax / resolution;
fftsize = 2;
for (i = lumres - 1; i > 0; i >>= 1) fftsize <<= 1;
mindist = gout.xmax * deadzone / resolution;
/* allocate buffers */
rgb = (RGB_line **) checkedmalloc(sizeof (RGB_line *) * lines);
dir = (signed char **) checkedmalloc(sizeof (signed char *) * lines);
for (i = 0; i < lines; i++) {
rgb[i] = (RGB_line *) checkedmalloc(sizeof (RGB_line) * gin.xmax);
dir[i] = (signed char *) checkedmalloc(sizeof (signed char) * lumres);
}
offset = (int *) checkedmalloc(sizeof (int) * lines);
f[last] = (float *) checkedmalloc(sizeof (float) * fftsize);
f[new] = (float *) checkedmalloc(sizeof (float) * fftsize);
cc = (float *) checkedmalloc(sizeof (float) * fftsize);
lum[last] = (float *) checkedmalloc(sizeof (float) * 2 * lumres);
lum[new] = (float *) checkedmalloc(sizeof (float) * 2 * lumres);
hist = (int *) checkedmalloc(sizeof (int) * lumres);
val[last] = (int *) checkedmalloc(sizeof (int) * lumres);
val[new] = (int *) checkedmalloc(sizeof (int) * lumres);
for (x = 0; x < lumres; x++) {
hist[x] = 0;
val[last][x] = 0;
val[new][x] = 0;
}
/* load source image into rgb */
if (verbose)
fprintf(stderr, "loading data ...\n");
for (line = 0; line < lines; line++) {
if (halfframe == 2) g_read_rgb(NULL, &gin);
g_read_rgb(rgb[line], &gin);
if (halfframe == 1) g_read_rgb(NULL, &gin);
}
#if 0
/* correct PAL color distortion -- this didn't work */
float y, *u, *v; /* PAL color components */
u = (float *) checkedmalloc(sizeof (float) * gin.xmax);
v = (float *) checkedmalloc(sizeof (float) * gin.xmax);
if (lines > 0)
for (x = 0; x < gin.xmax; x++) {
u[x] = (- 0.147 * (float) rgb[0][x][0]
- 0.288 * (float) rgb[0][x][1]
+ 0.436 * (float) rgb[0][x][2]);
v[x] = (+ 0.615 * (float) rgb[0][x][0]
- 0.514 * (float) rgb[0][x][1]
- 0.101 * (float) rgb[0][x][2]);
}
for (line = 1; line < lines; line++) {
for (x = 0; x < gin.xmax; x++) {
y = (+ 0.299 * (float) rgb[line][x][0]
+ 0.587 * (float) rgb[line][x][1]
+ 0.114 * (float) rgb[line][x][2]);
u[x] = 2 * (- 0.147 * (float) rgb[line][x][0]
- 0.288 * (float) rgb[line][x][1]
+ 0.436 * (float) rgb[line][x][2]) - u[x];
v[x] = 2 * (+ 0.615 * (float) rgb[line][x][0]
- 0.514 * (float) rgb[line][x][1]
- 0.101 * (float) rgb[line][x][2]) - v[x];
/* red */
k = 1.14 * v[x] + y + 0.49;
if (k < 0) k = 0; else if (k > 255) k = 255;
rgb[line][x][0] = k;
k = - 0.58 * v[x] - 0.39 * u[x]+ y + 0.49;
if (k < 0) k = 0; else if (k > 255) k = 255;
rgb[line][x][1] = k;
k = 2.04 * u[x] + y + 0.49;
if (k < 0) k = 0; else if (k > 255) k = 255;
rgb[line][x][2] = k;
}
}
#endif
/* main decryption loop */
offset[0] = 0;
init_fft(fftsize);
for (line = 0; line < lines; line++) {
for (i = 0; i < lumres; i++) {
lum[new][i] = 0;
for (j = 0; j < resolution; j++) {
x = i * resolution + j + lmargin;
if (x >= 0)
lum[new][i] +=
(0.299 * (float) rgb[line][x][0] +
0.587 * (float) rgb[line][x][1] +
0.114 * (float) rgb[line][x][2]);
}
lum[new][i] /= resolution;
}
#ifdef USE_FFT
/* transform luminance of each line into the frequency domain
* using the fast fourier transform */
for (i = lumres; i < fftsize; i++)
f[new][i] = 0.0; /* zero padding */
memcpy(f[new], lum[new], sizeof(float) * lumres);
real_fft(f[new] - 1, fftsize, 1); /* FFT */
#endif
if (line) {
#ifdef USE_FFT
/* compute cross-correlation using real FFT convolution */
/* complex multiplication in the frequency domain is
equivalent to convolution in time domain */
f[last][0] *= f[new][0]; /* DC part and sample-frequency/2 */
f[last][1] *= f[new][1]; /* are always real */
for (i = 2; i < fftsize; i += 2) {
/* multiplication of complex numbers */
dum = f[last][i];
f[last][i ] = f[last][i] * f[new][i] + f[last][i+1] * f[new][i+1];
f[last][i+1] = dum * f[new][i+1] - f[last][i+1] * f[new][i];
}
real_fft(f[last] - 1, fftsize, -1); /* inverse transform */
/* find peak in cross-correlation */
maxsum = -1e20;
for (x = 0; x < lumres; x++)
if (f[last][x] + f[last][fftsize - lumres + x] > maxsum) {
maxsum = f[last][x] + f[last][fftsize - lumres + x];
offset[line] = x;
}
#else
/*
* Simple two-loop cross-correlation. Mathematically equivalent to
* above FFT code, but slower. This is only left in here for
* testing and in order to help explaining the algorithm to
* people who are not familiar with fourier transform, convolution, etc.
*/
maxsum = -1e20;
for (x = 0; x < lumres; x++) {
sum = 0.0;
p = lum[new] + x;
for (i = 0; i < lumres - x; i++)
sum += lum[last][i] * *(p++);
p = lum[new];
for (; i < lumres; i++)
sum += lum[last][i] * *(p++);
if (sum > maxsum) maxsum = sum, offset[line] = x;
}
#endif
offset[line] = (offset[line] + offset[line - 1]) % lumres;
if (corr_edge || mark_edge) {
/* dynamic programming edge search, part 1 */
for (i = 0; i < lumres; i++) {
val[new][i] = 0;
j = i - max_edge_bias;
if (j < 0) j += lumres;
for (x = -max_edge_bias; x <= max_edge_bias; x++) {
k = x; /* k is the penalty for drifting */
if (k < 0) k = -k;
if (k == 1) k = 0;
k *= 2 * resolution;
if (val[last][j]-k > val[new][i]) {
val[new][i] = val[last][j]-k;
dir[line][i] = x;
}
if (++j == lumres) j = 0;
}
k = offset[line] + i;
if (k >= lumres) k -= lumres;
/* silly edge detector: */
j = (+ lum[new][k - 1 + ((k-1 < 0) ? lumres : 0)]
+ lum[new][k]
- lum[new][k + 1 - ((k+1 >= lumres) ? lumres : 0)]
- lum[new][k + 2 - ((k+2 >= lumres) ? lumres : 0)]);
if (j < 0) j = -j;
if (j > 10) j = 10 + (j-10)/4;
val[new][i] += j;
/* don't allow the edge to get too near (< mindist) to a cutting
* point except in the first or last few lines */
if ((k < mindist || k >= lumres - mindist) &&
line >= 3 && line < lines - 3) {
val[new][i] -= 1000;
if (show_deadzone) rgb[line][k * resolution][0] = 255;
}
}
}
}
if (verbose)
fprintf(stderr, "line %3d: offset = %3d (%.0f)\n",
linecount + ((halfframe == 2) ? 2 : 1), offset[line], maxsum);
last ^= 1; /* exchange buffers */
new ^= 1;
linecount += (halfframe ? 2 : 1);
}
if (corr_edge || mark_edge) {
/* edge search, part 2 */
maxsum = -1;
for (i = 0; i < lumres; i++) /* search starting point */
if (val[last][i] > maxsum) x = i, maxsum = val[last][i];
for (i = lines - 1; i >= 0; i--) {
j = offset[i] + x;
if (j >= lumres) j -= lumres;
if (j < 0) j += lumres;
if (mark_edge) {
rgb[i][j * resolution][0] = 0;
rgb[i][j * resolution][1] = 255;
rgb[i][j * resolution][2] = 0;
}
if (corr_edge) offset[i] = j;
x += dir[i][x];
if (x >= lumres) x -= lumres;
if (x < 0) x += lumres;
}
}
/* write output image */
if (verbose)
fprintf(stderr, "writing data ...\n");
for (line = 0; line < lines; line++) {
if (markbreak) {
hist[offset[line]]++;
rgb[line][0][0] = rgb[line][0][1] = rgb[line][0][2] = 0;
rgb[line][gout.xmax-1][0] = 255;
rgb[line][gout.xmax-1][1] = 255;
rgb[line][gout.xmax-1][2] = 255;
}
g_write_rgb(rgb[line] + lmargin, offset[line] * resolution, &gout);
if (halfframe)
g_write_rgb(rgb[line] + lmargin, offset[line] * resolution, &gout);
}
fclose(gout.f);
return 0;
}