A quick, easy and more reliable compression system
Suggesting a better compression method. It is very easy to implement.
Screen images are a very special type of image.
I did a simple test: I captured 50 screenshots and compressed them in several image formats (using Microsoft Photo Editor).
The results are very similar for all images. Here is one example:
JPG, 24-bit, quality 40: 131 KB
JPG, 24-bit, quality 1: 45 KB (very poor image)
GIF (LZW), 256 colors: 104 KB
PNG, 24-bit (filter + LZ77): 51 KB (lossless quality)
The best is PNG.
I suggest adding a PNG option for compression (using an open-source PNG library), or using the PNG filter system (filters 1 and 2):
http://www.libpng.org/pub/png/spec/1.2/PNG-Filters.html
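To make the suggestion concrete, here is a minimal sketch in C (not UltraVNC code) of the "PNG filter" idea: apply PNG filter type 2 ("Up") to a 24-bit framebuffer and deflate the result with zlib. Screen content has strong vertical coherence, so the filtered bytes are mostly zeros and deflate compresses them well; filter type 1 ("Sub") works the same way along a scanline.

```c
/*
 * Minimal sketch only: PNG filter type 2 ("Up") followed by zlib deflate.
 * Not how UltraVNC actually encodes updates.
 */
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

/* width/height in pixels, 3 bytes per pixel (RGB24). Returns the compressed
 * size, or 0 on failure; the caller frees *out. */
unsigned long filter_up_and_deflate(const unsigned char *fb,
                                    int width, int height,
                                    unsigned char **out)
{
    size_t stride = (size_t)width * 3;
    size_t raw_len = stride * height;
    unsigned char *filtered = malloc(raw_len);
    if (!filtered)
        return 0;

    /* PNG filter type 2 ("Up"): each byte minus the byte in the row above. */
    memcpy(filtered, fb, stride);                 /* first row: no row above */
    for (int y = 1; y < height; y++)
        for (size_t x = 0; x < stride; x++)
            filtered[y * stride + x] =
                (unsigned char)(fb[y * stride + x] - fb[(y - 1) * stride + x]);

    uLongf comp_len = compressBound(raw_len);
    *out = malloc(comp_len);
    if (!*out || compress2(*out, &comp_len, filtered, raw_len,
                           Z_BEST_SPEED) != Z_OK) {
        free(filtered);
        free(*out);
        *out = NULL;
        return 0;
    }
    free(filtered);
    return comp_len;
}
```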
- Rudi De Vos
- Admin & Developer
1) You need the speed to compress/decompress.
zlib is faster than bzip because the calculations take too much time on the server and viewer.
update = compress + send + decompress
50 ms + 50 ms + 10 ms --> 110 ms
100 ms + 20 ms + 60 ms --> 180 ms == better compression but slower
2) VNC uses compound compression.
The screen is divided into:
- single-color regions (sent as rect + color)
- a few colors (background + foreground + data)
- full color...
Even PNG will not be smaller than 1 or 2 colors.
Pure PNG will not work; you need to use it in combination with the other methods, like Tight+PNG instead of Tight+JPEG.
Are any compression/decompression times available compared to JPEG and zlib?
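A rough sketch of the compound split described above, assuming a hypothetical 16x16 tile of 32-bit pixels. The real UltraVNC encoders are more involved, but the idea of routing each tile to the cheapest representation (rect+color, palette, full-color encoder) is the same.

```c
/*
 * Sketch only: classify a tile as solid colour, two-colour
 * (background + foreground + bitmask), or full colour.
 */
#include <stdint.h>

enum tile_class { TILE_SOLID, TILE_TWO_COLOR, TILE_FULL_COLOR };

enum tile_class classify_tile(const uint32_t *tile, int w, int h)
{
    uint32_t c0 = tile[0], c1 = 0;
    int ncolors = 1;

    for (int i = 1; i < w * h; i++) {
        uint32_t p = tile[i];
        if (p == c0 || (ncolors == 2 && p == c1))
            continue;
        if (ncolors == 1) {            /* second distinct colour found */
            c1 = p;
            ncolors = 2;
        } else {                       /* third distinct colour: full colour */
            return TILE_FULL_COLOR;
        }
    }
    return ncolors == 1 ? TILE_SOLID : TILE_TWO_COLOR;
}
```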
I like vncviewer's MAN page (from TightVNC) about encodings....
ENCODINGS
The server supplies information in whatever format is desired by the client, in order to make the client as easy as possible to implement. If the client represents itself as able to use multiple formats, the server will choose one.
Pixel format refers to the representation of an individual pixel. The most common formats are 24 and 16 bit "true-color" values, and 8-bit "color map" representations, where an arbitrary map converts the color number to RGB values.
Encoding refers to how a rectangle of pixels are sent (all pixel information in VNC is sent as rectangles). All rectangles come with a header giving the location and size of the rectangle and an encoding type used by the data which follows. These types are listed below.
Raw
The raw encoding simply sends width*height pixel values. All clients are required to support this encoding type. Raw is also the fastest when the server and viewer are on the same machine, as the connection speed is essentially infinite and raw encoding minimizes processing time.
CopyRect
The Copy Rectangle encoding is efficient when something is being moved; the only data sent is the location of a rectangle from which data should be copied to the current location. Copyrect could also be used to efficiently transmit a repeated pattern.
RRE
The Rise-and-Run-length-Encoding is basically a 2D version of run-length encoding (RLE). In this encoding, a sequence of identical pixels are compressed to a single value and repeat count. In VNC, this is implemented with a background color, and then specifications of an arbitrary number of subrectangles and color for each. This is an efficient encoding for large blocks of constant color.
CoRRE
This is a minor variation on RRE, using a maximum of 255x255 pixel rectangles. This allows for single-byte values to be used, reducing packet size. This is in general more efficient, because the savings from sending 1-byte values generally outweighs the losses from the (relatively rare) cases where very large regions are painted the same color.
Hextile
Here, rectangles are split up into 16x16 tiles, which are sent in a predetermined order. The data within the tiles is sent either raw or as a variant on RRE. Hextile encoding is usually the best choice for use in high-speed network environments (e.g. Ethernet local-area networks).
Zlib
Zlib is a very simple encoding that uses the zlib library to compress raw pixel data. This encoding achieves good compression, but consumes a lot of CPU time. Support for this encoding is provided for compatibility with VNC servers that might not understand Tight encoding, which is more efficient than Zlib in nearly all real-life situations.
Tight
Like Zlib encoding, Tight encoding uses the zlib library to compress the pixel data, but it pre-processes data to maximize compression ratios, and to minimize CPU usage on compression. Also, JPEG compression may be used to encode color-rich screen areas (see the description of -quality and -nojpeg options above). Tight encoding is usually the best choice for low-bandwidth network environments (e.g. slow modem connections).
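For illustration of the RRE entry above, here is a naive RRE-style encoder in C: it picks a background colour and emits a 1-pixel-high subrectangle for every horizontal run of another colour. Real RRE encoders choose the most frequent colour as background and merge runs into larger subrects, but the principle matches the description in the man page.

```c
/*
 * Illustration only: a naive RRE-style encoding of one rectangle.
 * Background is naively taken from the first pixel; a real encoder would
 * pick the most frequent colour and build larger subrects.
 */
#include <stdint.h>
#include <stdio.h>

struct subrect { int x, y, w, h; uint32_t color; };

/* Emits subrects through the callback; returns the chosen background. */
uint32_t rre_encode(const uint32_t *px, int width, int height,
                    void (*emit)(const struct subrect *))
{
    uint32_t bg = px[0];

    for (int y = 0; y < height; y++) {
        int x = 0;
        while (x < width) {
            uint32_t c = px[y * width + x];
            if (c == bg) { x++; continue; }
            int run = 1;
            while (x + run < width && px[y * width + x + run] == c)
                run++;
            struct subrect r = { x, y, run, 1, c };
            emit(&r);
            x += run;
        }
    }
    return bg;
}

/* Example sink: rre_encode(pixels, w, h, print_subrect); */
void print_subrect(const struct subrect *r)
{
    printf("subrect %dx%d at (%d,%d) colour %08x\n",
           r->w, r->h, r->x, r->y, (unsigned)r->color);
}
```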
Re: A quick, easy and more reliable compression system
Jose Manuel wrote: I suggest using the PNG option for compression.
PNG is good for images with repetitive patterns; JPEG is good for photos.
A 1600x1200 photo saved as PNG can be 5 to 10 MB; saved as JPEG it can come down to about 500 KB at 95% quality.
When the image contains repetitive patterns it is the inverse: the PNG becomes very small and the JPEG becomes very large compared to the PNG.
It would be nice to have an encoding format that divides the screen into squares and then determines, for each square, which of those two formats is best, for example by looking at the number of different colors in the square.
By the way, what about supporting JPEG 2000?
Last edited by OhMyGoat on 2005-05-30 07:56, edited 1 time in total.
Importantly, nowadays a desktop wallpaper is more likely a full-color photo than a simple pattern.
And the situations where VNC is used keep expanding wider and wider.
When you browse web pages full of images over VNC, you would surely prefer the JPEG approach, which on average can present these images more clearly and faster.
OhMyGoat,
I'm afraid JPEG 2000 is too "expensive": it requires much more computation than JPEG while the compressed data size only drops to about half...
Lizard
- Rudi De Vos
- Admin & Developer
What we actually need is a good and fast algorithm that can divide an update into regions:
- text --> lossless compression, to keep the text sharp
- solid regions --> rect + color, nothing is smaller
- images
Regions must not be too fragmented, because each region creates location overhead, and compression on regions that are too small performs badly.
I don't think the image encoder would make the big difference, because each encoder has its own characteristics.
To get smaller data streams, better caching would probably do more: first prevent data from being sent at all, then spend time on compression.
Please check this excellent study:
http://www.cis.pku.edu.cn/teacher/visio ... script.pdf
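A minimal sketch of the "prevent data from being sent" caching idea (an assumption about how such a cache could look, not how UltraVNC's cache actually works): keep a checksum per 16x16 tile of the last transmitted frame and hand only changed tiles to the encoder. The send_tile() callback is hypothetical.

```c
/* Sketch only: per-tile change detection via a cached checksum. */
#include <stdint.h>

#define TILE 16

/* FNV-1a over the tile's pixels: cheap and good enough to detect changes. */
static uint64_t tile_hash(const uint32_t *fb, int fb_w, int tx, int ty,
                          int tw, int th)
{
    uint64_t h = 1469598103934665603ULL;
    for (int y = 0; y < th; y++) {
        const uint8_t *row =
            (const uint8_t *)(fb + (ty * TILE + y) * fb_w + tx * TILE);
        for (int i = 0; i < tw * 4; i++)
            h = (h ^ row[i]) * 1099511628211ULL;
    }
    return h;
}

/* Calls send_tile() only for tiles that differ from the cached frame;
 * `cache` holds one hash per tile and persists between updates. */
void send_changed_tiles(const uint32_t *fb, int w, int h, uint64_t *cache,
                        void (*send_tile)(int tx, int ty, int tw, int th))
{
    int tiles_x = (w + TILE - 1) / TILE;
    int tiles_y = (h + TILE - 1) / TILE;

    for (int ty = 0; ty < tiles_y; ty++)
        for (int tx = 0; tx < tiles_x; tx++) {
            int tw = (tx + 1) * TILE <= w ? TILE : w - tx * TILE;
            int th = (ty + 1) * TILE <= h ? TILE : h - ty * TILE;
            uint64_t hsh = tile_hash(fb, w, tx, ty, tw, th);
            if (hsh != cache[ty * tiles_x + tx]) {
                cache[ty * tiles_x + tx] = hsh;
                send_tile(tx, ty, tw, th);
            }
        }
}
```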
Compound Image Compression for Real-Time Computer Screen Image Transmission
good and fast algorithm
SPEC = Shape Primitive Extraction and Coding
From the white paper, I remember the quality/performance in real time:
One 800×600 true-color screen image has a size of 1.44 MB
=
For typical 800×600 computer screen images, the SPEC coded file sizes are less than 100 KB
Well,
can this beautiful, time-consuming research be implemented in UltraVNC V1.x or V2, so that every VNC flavor can use it?
UltraVNC 1.0.9.6.1 (built 20110518)
OS Win: xp home + vista business + 7 home
only experienced user, not developer
- Rudi De Vos
- Admin & Developer
Well, I think the best algorithm would be to analyze each window to find out which controls are inside it and synthesize information about them. This would mean not having to transfer images any more, except for picture boxes and a few other cases. There is plenty of window-skinning software that just changes the appearance of Windows controls; maybe their code would be reusable to get information about all the visible controls and redraw them locally (practically it would act like a window-skinning tool that, instead of drawing the skins over the controls, draws them on another computer). I think MS Remote Desktop works that way. Also, each control has its own screen region; if a control can't be supported, because it is a custom control or because the VNC server does not support it, it can just be sent to the other computer the traditional way.
This would be a very fast and efficient way to solve the bandwidth problem.
Last edited by OhMyGoat on 2005-05-30 22:34, edited 3 times in total.
Rudi De Vos wrote: Anybody know if there is C code available...
http://www.cis.pku.edu.cn/teacher/visio ... o/SPEC.htm
You need to extract the algorithm from this (4) Windows Demo: here!
http://www.cis.pku.edu.cn/teacher/visio ... inDemo.zip
Hope this helps?
UltraVNC 1.0.9.6.1 (built 20110518)
OS Win: xp home + vista business + 7 home
only experienced user, not developer
Rudi De Vos wrote: You need the speed to compress/decompress ... update = compress + send + decompress
Thought I should clarify this.
There are two different ways of measuring update speed, and they matter to different degrees depending on the use case.
The first, generally referred to as latency or lag, is the time from when you do something with the mouse or keyboard to when the screen updates to reflect what you did. This is the total time of sending the commands to the server, reading the screen on the server, compressing, sending the display back to the client, decompressing, and displaying.
i.e. sending input + reading display + compression + sending display + decompression + display = lag
Say,
10 ms + 5 ms + 30 ms + 100 ms + 10 ms + 5 ms = 160 ms
10 ms + 5 ms + 70 ms + 70 ms + 40 ms + 5 ms = 200 ms
The second way of measuring update speed is the frequency of the updates. This is basically governed by the slowest link in the chain, which is usually either compression or sending the display. None of the processes ties up the earlier ones, so, for example, the server can be compressing one frame while it is still sending the previous one.
So in the above examples:
In the first example the slowest part, sending the display, took 100 ms, so it could update at most 10 times a second.
In the second example the slowest part was 70 ms, so it could update at most 14.3 times a second.
In terms of 'user experience', the lag time dictates the responsiveness of the system, whereas the update frequency dictates the smoothness. Both are important, so any compression scheme must be designed not only to minimise the total time, but also to balance the trade-off between compression time and compression ratio so as not to reduce the update frequency.
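The two measures can be written down directly. The sketch below just recomputes the example numbers above (illustrative values, not measurements): lag is the sum of the stages, while the sustainable update rate is limited by the slowest single stage because the stages overlap.

```c
/* Toy calculation of lag vs. maximum update frequency for two encoders. */
#include <stdio.h>

int main(void)
{
    /* input, read screen, compress, send, decompress, display (ms) */
    double a[6] = { 10, 5, 30, 100, 10, 5 };   /* encoder A */
    double b[6] = { 10, 5, 70,  70, 40, 5 };   /* encoder B */
    double *enc[2] = { a, b };

    for (int e = 0; e < 2; e++) {
        double lag = 0, slowest = 0;
        for (int i = 0; i < 6; i++) {
            lag += enc[e][i];
            if (enc[e][i] > slowest)
                slowest = enc[e][i];
        }
        /* stages are pipelined, so the rate is 1 / slowest stage */
        printf("encoder %c: lag %.0f ms, max %.1f updates/s\n",
               'A' + e, lag, 1000.0 / slowest);
    }
    return 0;
}
```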
Hello, Guest user!
Hm... sounds tricky, but yes, I kind of agree with the idea.
And just to let you know, Tight encoding seems to have been designed with a somewhat similar concept: maximizing network efficiency.
But think: obviously you never want a situation like (sending display < decompression) to happen, since it would cause vncviewer.exe's gigantic, monopolistic CPU load and massive screen update delay at once.
It's just my little opinion that the CPU-side share of the update cycle should rather stay low, since users, while controlling another PC over VNC, might also want to talk on VoIP or listen to music etc...
Well, it's still a good point you're based on... the CPU requirement should be low and so should the network bandwidth. It sounds like a balance issue.
So when there's an undiscovered efficient encoding method that beats Tight in full-color data size, ZRLE in pseudo-color data size or ZlibHex in comp/decomp speed, I think it's definitely got a place.
Thanks
Last edited by lizard on 2005-06-19 09:26, edited 1 time in total.
Lizard
Sorry, I should have put my name in there... it is so rare these days to find a forum which allows posting without registering...
I don't think the chance of sending display < decompression is very likely, simply because decompression routines are generally much faster (2-10 times) than compression. On the other hand, the server may be a much faster computer than the client.
It is generally acceptable (but obviously not desirable) for the client to have a high CPU load, since it will usually be focusing just on decompression and display. The server, however, cannot let the load get too high, since it also has to deal with the task of running applications in a somewhat responsive manner.
My post, btw, wasn't meant to promote or criticise any of the encodings, but just to point out extra judging criteria that should be considered. I would still consider lag much more important than update frequency for most office or utility programs, which I imagine are the prime uses of UVNC. For anything involving animation or motion, however, update frequency can become more important.
As for an undiscovered efficient coding method, my only idea at this stage is to do the compression inside the GPU through an OpenGL/DirectX program. I have no idea how feasible this would be. Obviously the GPU instruction set is not optimised for this sort of thing, but the fact that it has fast direct access to the frame buffer without even needing to go through a driver, and that the GPU is most of the time sitting around doing basically nothing, would suggest that it is worth looking into. It would be a huge undertaking though, probably requiring a complete rewrite of the compression routines just for an 'it might possibly be faster.'
Another idea that might help is focusing on the region of the active window, and delaying updates outside this region if there is a lot of activity inside. 1000 ms updates outside the active window are probably acceptable most of the time if it means 2 or 3 times the updates inside the active window. The mouse cursor would also need to be tracked and the surrounding area added to this 'focus zone' when the mouse is outside the active window region.
Also (these ideas are coming to me as I type), IIRC from the last time I had a working VNC setup, using lossy compression tended to leave lots of compression artifacts on the screen, even when nothing was going on. UVNC should use its idle time, when there is not much happening on the screen, to correct the artifacts. This would be especially useful where text is involved.
Gah, just had another thought. I don't suppose UVNC supports shifting pixels, similar to how MPEG does? It would obviously greatly speed up scrolling if it could just say 'move this section up 50px, and then put these new pixels in the hole' rather than redrawing the whole area. This would slow down the compression though, because of the searching for moved areas. Maybe UVNC could have some sort of hook-based detection of when a control is scrolled, and only then search for a shifted region.
OK, that's the end... I'll stop thinking now (at least for this time). If you've read this whole thing you're probably bored or confused by now, but maybe something I wrote will be useful.
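The VNC protocol does define CopyRect for exactly this, as the man page quoted earlier notes; the open question is detecting the shift cheaply. Here is a minimal sketch of such a detector (the candidate offsets and full-width row comparison are simplifying assumptions; a real implementation would restrict the search to the window that scrolled).

```c
/*
 * Sketch only: detect a vertical scroll so the region could be sent as a
 * CopyRect ("move this section up N pixels") instead of re-encoded pixels.
 */
#include <stdint.h>
#include <string.h>

/* Returns the upward shift (in rows) that makes `newfb` match `oldfb` over
 * the region rows [y0, y1), or 0 if no candidate offset matches. */
int detect_vertical_scroll(const uint32_t *oldfb, const uint32_t *newfb,
                           int width, int y0, int y1)
{
    static const int candidates[] = { 1, 2, 4, 8, 16, 32, 50, 64, 100 };

    for (unsigned i = 0; i < sizeof(candidates) / sizeof(candidates[0]); i++) {
        int dy = candidates[i];
        if (y0 + dy >= y1)
            break;
        /* after an upward scroll, new row y equals old row y + dy */
        int match = 1;
        for (int y = y0; y < y1 - dy; y++) {
            if (memcmp(newfb + (size_t)y * width,
                       oldfb + (size_t)(y + dy) * width,
                       (size_t)width * sizeof(uint32_t)) != 0) {
                match = 0;
                break;
            }
        }
        if (match)
            return dy;   /* content moved up by dy rows */
    }
    return 0;
}
```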
- Rudi De Vos
- Admin & Developer
Hello,
here are some of my recent thoughts about the encoding.
*A simple encoding with JPEG and transparent PNG*
This combination of JPEG and PNG would give us the best compression in both full color and pseudo color.
What we particularly have to implement is only the full-color area detection code.
Full-color areas would be sent as JPEG rects first; then a PNG follows, filled black (R/G/B: 0/0/0) and 100% transparent over the JPEG area.
PNG's internal compression will cancel the size overhead of sending these blank areas for JPEG and solid rects, as in Tight encoding.
(We can still use LZO, if preferred, with no internal PNG compression.)
There's no "subrects"-related code like in Tight, so never worry about protocol overheads.
This is a very simple method, and the data size sent this way would probably still be as small as Tight; CPU load would likely be very low, guaranteeing fast transmission.
I'm currently writing some conceptual code for this encoder.
*Intelligent switching between Zlib/LZO*
A CPU load monitoring code implementation could help VNC switch between faster/better compression methods.
It would let the encoding adapt to both Internet and LAN use, without any major modification.
*Window-based caching*
Together with the "forced refresh problem while scrolling," this "forced redrawing of windows when moving another one around" is one of the last few factors still keeping VNC inferior to RDP in speed.
So I argue for a caching system in addition to the current one: window-based caching. (When not necessary) it would improve performance when moving windows around.
We should cache at least the desktop window (which is always one of the biggest windows and is never minimized) and, if possible, a few foreground windows' GDI contents.
The problem here is, as UltraSam said some time ago, massive memory usage with multiple framebuffers.
Lizard
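A small sketch of the Zlib/LZO switching idea from the post above: route the same update buffer to LZO when the CPU is the bottleneck and to zlib when the network is. cpu_is_busy() and the policy are hypothetical; lzo1x_1_compress() and compress2() are the real LZO and zlib entry points, and lzo_init() must have been called once at startup.

```c
/* Sketch only: pick a compressor per update depending on CPU load. */
#include <zlib.h>
#include <lzo/lzo1x.h>

/* work memory for lzo1x_1_compress, properly aligned */
static lzo_align_t wrkmem[(LZO1X_1_MEM_COMPRESS + sizeof(lzo_align_t) - 1)
                          / sizeof(lzo_align_t)];

extern int cpu_is_busy(void);   /* hypothetical load monitor */

/* Compress `in` into `out` (out must hold at least in_len + in_len/16 + 64 + 3
 * bytes for LZO, or compressBound(in_len) bytes for zlib).  Returns the number
 * of output bytes, or 0 on error; *used_lzo reports which path was taken. */
unsigned long compress_update(const unsigned char *in, unsigned long in_len,
                              unsigned char *out, int *used_lzo)
{
    if (cpu_is_busy()) {
        /* CPU-bound: LZO trades ratio for speed */
        lzo_uint out_len = 0;
        if (lzo1x_1_compress(in, in_len, out, &out_len, wrkmem) != LZO_E_OK)
            return 0;
        *used_lzo = 1;
        return (unsigned long)out_len;
    } else {
        /* network-bound: spend CPU on a better ratio with zlib */
        uLongf out_len = compressBound(in_len);
        if (compress2(out, &out_len, in, in_len, Z_DEFAULT_COMPRESSION) != Z_OK)
            return 0;
        *used_lzo = 0;
        return (unsigned long)out_len;
    }
}
```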
lizard,
I can read there a future beautiful work, greatly appreciated by UltraVNC users of any level.
lizard wrote: I'm currently writing some conceptual code of this encoder.
Easy name: UltraTight?
I'm very excited to see your next build, including your great effort to improve the speed and quality of the encoder.
Anyway, I encourage you to continue this very, very well optimized encoder.
UltraVNC 1.0.9.6.1 (built 20110518)
OS Win: xp home + vista business + 7 home
only experienced user, not developer
Bumping this thread again with stuff about the SPEC codec.
SPEC Hybrid Image Format Codec Sample w/ Tony Lin's codec implementation (276KB)
One of the reasons there's no 3.3-compatible VNCEncodeSPEC component is that I'm on the side that the V2 code should rather be overhauled.
Well, I'd like you guys to give it a go and see if it has a place as a new encoder candidate.
Lizard
- Rudi De Vos
- Admin & Developer
redge wrote: Did you make your optimized UltraVNC with the SPEC encoder? Any available exe, apart from your source code, for making it?
Nope, not for the moment, redge :<
There are still a few issues I haven't overcome to integrate the SPEC codec into VNC.
One of them is how to deal with restricted color modes.
Rudi De Vos wrote: What's the segmentation speed compared to compression? Have you already made some benchmarks? SPEC with replaced JPEG (using ijl-ipp jpeg "implemented as entropy's") will even be faster.
There's a sample program named SmartZip, provided by Lin himself, which shows the encoding/decoding time of a SPEC HIF image as it is processed.
According to it, the encoding time for an XGA full-color screenshot varies between 60~400 ms without SIMD optimization on my AthlonXP 1700+, depending entirely on the amount of JPEG, and the decompression speed is about double the compression speed.
This means the SPEC codec is somewhat similar to Tight in concept, where "area detection," "GZIP" and "JPEG" are all done, but here the area detection takes very little time compared to the JPEG compression part; and considering other JPEG encoders, there's no doubt another drastic speed improvement can be made with IJL, as we saw for Tight.
Lizard
LZO LIBRARY!?
It was already mentioned, but I couldn't find an answer to a specific question here or in the documentation...
Why doesn't UltraVNC use the LZO compression library?
I already made my own tests (for another project) and can confirm what other tests on the net say: it's fast as hell!
IMO, the increase in speed will top the decrease in size... and even if not, there could still be a fallback method for extremely slow connections...
lizard wrote: *intelligent switching between Zlib/LZO*
A CPU load monitoring code implementation could help VNC to switch between faster/better compression methods.
I don't think zlib would really be necessary any more, since LZO includes slower compression levels which achieve quite a competitive compression ratio while still decompressing at a very high speed!
AND the LZO library is coded in ANSI C, so it should be no problem to integrate, right?
If interested, you can find the source and a lot of useful information at: http://www.oberhumer.com/opensource/lzo
(There is also a miniLZO library: 1 source file and 3 header files! Maybe that already covers our needs...)
Sincerely,
chris
PS: the Mars robot 'Pathfinder' also uses Oberhumer's LZO library for compression...
Last edited by snobs on 2005-11-22 01:57, edited 2 times in total.
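For reference, here is a minimal miniLZO round trip, assuming minilzo.c/minilzo.h from the LZO distribution are added to the build; the output buffer sizing follows the worst-case formula documented with miniLZO.

```c
/* Sketch only: compress and decompress one buffer with miniLZO. */
#include <stdio.h>
#include <stdlib.h>
#include "minilzo.h"

/* work memory for lzo1x_1_compress, properly aligned */
static lzo_align_t wrkmem[(LZO1X_1_MEM_COMPRESS + sizeof(lzo_align_t) - 1)
                          / sizeof(lzo_align_t)];

int main(void)
{
    if (lzo_init() != LZO_E_OK) {
        fprintf(stderr, "lzo_init failed\n");
        return 1;
    }

    /* pretend this is one framebuffer update */
    lzo_uint in_len = 640 * 480 * 4;
    unsigned char *in  = calloc(in_len, 1);                    /* all zeros */
    unsigned char *out = malloc(in_len + in_len / 16 + 64 + 3); /* worst case */
    unsigned char *dec = malloc(in_len);
    lzo_uint out_len = 0, dec_len = in_len;

    if (!in || !out || !dec)
        return 1;

    if (lzo1x_1_compress(in, in_len, out, &out_len, wrkmem) != LZO_E_OK)
        return 1;
    printf("compressed %lu -> %lu bytes\n",
           (unsigned long)in_len, (unsigned long)out_len);

    if (lzo1x_decompress(out, out_len, dec, &dec_len, NULL) != LZO_E_OK
        || dec_len != in_len)
        return 1;
    printf("round trip OK\n");

    free(in); free(out); free(dec);
    return 0;
}
```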
- Rudi De Vos
- Admin & Developer
The LZO compression is already used for the "ultra" encoding.
It is only suitable for LAN to LAN...
LZO is a candidate for the new encoder we are planning to make for V2.
Possibly SPEC + LZO + Intel JPEG (IJL) + zlib.
The new encoder needs to be bandwidth-aware: one single encoder for LAN and modem.
For VNC you need to measure
compression time + transmission time + decoding time (== FPS)
For pure LZO, the transmission time is 100x the compression + decoding time.
Hi,
Tony Lin's SPEC HIF decoder had a little bug that was sometimes causing an access violation, so here I am with a bit of a noob workaround for the problem. (The .lib was precompiled for those who (means myself) require an OMF binary.)
SPEC HIF (jpp+lzw) Codec + Quick Hack by Lizard (105KB)
Copyright: Mr. Tony Lin, Associate Professor at Peking University, Beijing
Also try this little program that I wrote: it shows how fast the codec is. Even though this dumb thing keeps refreshing the whole fullscreen about every second, it feels rather like you're on old-school VNC with polling, since the SPEC codec generates very small data. Check out also that it's done without losing the text area's sharpness!
SPEC HIF Codec Sample Program (226KB)
Lizard
Does anyone have the SPEC sample source?
The link doesn't work at all...
Can you share the source?
Re: A quick, easy and more reliable compression system
What is the least bandwidth-intensive method to use with the current latest UltraVNC release? That is to say: not the fastest one, but the method that saves bandwidth.
I'm on an HSDPA connection with very limited bandwidth. If I go over a certain monthly limit, I have to pay extra.
Re: A quick, easy and more reliable compression system
How can I use IPP with the UltraVNC server? Please advise.