Thursday, 6 March 2014

A tale of a false alarm from ConfigServer, cPanel and a hosting provider.


I'm responsible for a couple of cPanel/WHM managed dedicated servers.

We keep them updated, and try to do as little customization as possible outside of what cPanel knows about. We enabled mod_proxy_fcgi and PHP-FPM so we can use the Apache 2.4 Event MPM for our fairly high-traffic web site. It's unfortunate that cPanel doesn't offer this configuration out of the box, but that's for another blog post.

Early this morning we got a message from our lfd daemon (a service installed by ConfigServer Security & Firewall, a free cPanel plugin set up by our hosting provider):

The following list of files have FAILED the md5sum comparison test. This means that the file has been changed in some way. This could be a result of an OS update or application upgrade. If the change is unexpected it should be investigated:
/usr/bin/ghostscript: FAILED
/usr/bin/gs: FAILED

The funny thing is, nothing upgraded any RPM packages in this time window: our /var/log/yum.log didn't mention any upgrades to the ghostscript package that provides the /usr/bin/gs binary (/usr/bin/ghostscript is a symlink to gs). We have disabled the automatic updates that can be initiated by the cPanel upcp --cron script, but the system is regularly kept up to date manually with yum update.

I reinstalled the package with yum reinstall ghostscript (ghostscript-8.70-19.el6.x86_64 was reinstalled)

and the binary size and md5sum changed like this:

before:
size: 19152 bytes
md5sum: c64b5016d94450b476148c31cfef61ff

after reinstall:
size: 6760 bytes
md5sum: 73db43e258c4b191757b7ba75a883321

This is what actually happened: our managed hosting provider had apparently changed our setup to upgrade system packages automatically (probably with the best intentions, due to the recent GnuTLS issue). Prelinking is also enabled on our system, so when upcp (the cPanel automatic upgrade cron script that runs periodically) executed /usr/local/cpanel/scripts/rpmup to upgrade system packages, it also ran the prelinking step, which rewrote our /usr/bin/gs binary with extra prelink data and changed its size and md5sum.

A similar issue is described here:

http://linsec.ca/blog/2012/01/23/rpm-v-and-prelinked-binaries/


Friday, 16 August 2013

Dota 2 Wine optimization for Intel GPUs

Dota 2 for Linux implements its 3D engine using a Direct3D-to-OpenGL translation layer called ToGL. I assume this layer can be used in different ways, but for Dota 2 it seems to be used in a less than ideal way, as documented previously here. In short, Dota 2 for Linux compiles 11000 shaders on startup, compared to just 220 for the Wine version. This causes much higher memory usage (2.6 GB vs 1.0 GB) and start-up time (1 min 15 s vs 35 s).


With Wine we actually do get the source of its Direct3D-to-OpenGL layer, called wined3d, since Wine is open source. Funny enough, the stack used to run the Windows version of Dota 2 is actually the more open one.

Since Dota 2 for Windows, when run under Wine, actually outperforms the native Linux version in some important aspects, and its framerate is only slightly lower, I decided to take a look at improving its performance.

I used a tool called apitrace to record a trace of a Dota 2 session under Wine, so I could analyze the OpenGL calls and look at driver performance warnings (INTEL_DEBUG=perf) with qapitrace.

I optimized two things:

1. Reduce the number of VS and PS constants checked

There were many calls each frame checking the values of VS (vertex shader) and PS (pixel shader, also called fragment shader in OpenGL) constants, like this:

532550 glGetUniformLocationARB(programObj = 152, name = "vs_c[4095]") = -1

This function is called from shader_glsl_init_vs_uniform_locations() in glsl_shader.c in
wined3d.

It uses GL_MAX_VERTEX_UNIFORM_COMPONENTS_ARB, which is defined to be 4096 by #define MAX_UNIFORMS in the Mesa source.

Dota 2 doesn't need that many uniforms, most checks return -1, and wined3d checks all of the values for both VS
and PS uniforms.

I reduced this number to 256, just enough for Dota 2. This saved thousands of calls per frame.

2. Use fast clear depth more often

The Intel driver complains about not being able to use fast depth clears because scissoring is enabled. It turns out that device_clear_render_targets() in wined3d's device.c doesn't really need to call glScissor for Dota 2; it's probably an optimization that maps better to a Direct3D driver.


A small patch including both optimizations is here:
https://gist.github.com/vrodic/6437312

This patch is a hack, and the glScissor part probably breaks other apps, so it's for Dota 2 only. It could perhaps be done in a cleaner way that could be merged into Wine, but I'm not a wined3d expert.

So how much faster is it? A solo mid hero on the setup described in the previous blog post used to get 41 FPS; now it gets 46-49 FPS. The native version performs similarly to optimized Wine, but in some situations it drops below it.

Ideas for improvement:

Dota 2 for Linux needs ~7500 GL calls per frame. The Wine version, even after my optimizations, needs 37000 (EDIT: just as I was writing this post there were some improvements; now it's about 22000).

There is probably a way to optimize this even more, but that's outside the scope of an afternoon project, which this was. I'd like to keep digging, though.

Wednesday, 14 August 2013

Dota 2 performance: Linux/native vs Linux/Wine vs Windows 7 on Intel GPU

So how well does the mega-popular game Dota 2 run on Linux? I've had some time to do detailed tests on my Intel Ivy Bridge GPU laptop (Lenovo ThinkPad X230). The graphics settings are the same across all versions.


Dota 2 Windows binary under Wine 1.6
Startup: 35 secs (over the ntfs-3g userspace filesystem driver, which is not that fast)
Mem: 1.0GB
FPS: 37.5 FPS
Power usage LP mode (patched wine): 34W

Dota 2 Linux native
Startup: 1 min 14 secs (native ext4)
Mem: 2.6 GB
FPS: 40 FPS

Dota 2 Windows native - Windows 7
Startup: 25 secs
Mem: 1.2GB (measured by Windows Task manager)
FPS: 80 FPS
Power usage LP mode: 24W

Test setup:

CPU: Core i5 3320M

Resolution: 1366x768

GPU settings (same on all Dota 2 versions): shadows MEDIUM, textures HIGH, render quality: HIGHEST, all other: OFF, vsync: disabled

GPU settings, LP mode (for the power measurements above): shadows LOW, effects OFF, textures MED, render quality: LOWEST, fps_max: 30

Mesa version: git-8b5b5fe (with rendering regression fix from here: https://bugs.freedesktop.org/show_bug.cgi?id=67887)

Linux distro: Ubuntu 13.10, kernel 3.11 drm-intel-nightly, running LXDE

FPS Benchmark method:
in "dota 2 beta/dota/cfg/autoexec.cfg"
cl_showfps 2
playdemo test

For the FPS number: look at the last 240 frames as the demo is ending

Memory measurement: RES column in `top`
Startup time measurement: stopwatch until the map is loaded

Analysis of the apitrace trace file:

I've made a trace of Dota 2 with apitrace, revealing possible performance issues.

Before the first frame of the game is drawn, 11038 shaders are compiled. That is most likely why the load time is so long and the memory usage so high. In addition, many of the shaders in use seem to be recompiled by the Intel driver while rendering frames.

In the 162 frames of the trace I've analyzed, there are 193 shader recompiles and 643 different shader programs (each program has 1 VS and 1 FS) in use.

In contrast, the Wine version of Dota 2 compiles only 220 shaders.

Performance feedback from the Intel driver:

glretrace from apitrace prints driver performance warnings. Here is a sample of some that repeat every frame, including shader recompile warnings:

575332: glDebugOutputCallback: Medium severity API performance issue 13, Clear color unsupported by fast color clear. Falling back to slow clear.
576094: glDebugOutputCallback: Medium severity API performance issue 14, Failed to fast clear depth due to scissor being enabled. Possible 5% performance win if avoided.
577739: glDebugOutputCallback: Medium severity API performance issue 4, Using a blit copy to avoid stalling on 480b glBufferSubData() to a busy buffer object.
577801: glDebugOutputCallback: Medium severity API performance issue 8, Recompiling vertex shader for program 7901
577801: glDebugOutputCallback: Medium severity API performance issue 9, Didn't find previous compile in the shader cache for debug
577801: glDebugOutputCallback: Medium severity API performance issue 1, Recompiling fragment shader for program 7901
577801: glDebugOutputCallback: Medium severity API performance issue 10, Didn't find previous compile in the shader cache for debug
577801: glDebugOutputCallback: Medium severity API performance issue 3, FS compile took 2.266 ms and stalled the GPU

The warnings in sequence are at the URL below:
https://gist.github.com/vrodic/6235313

It's interesting that most of these warnings were added at Valve's request back in 2012:
http://lists.freedesktop.org/archives/mesa-dev/2012-August/025288.html




Conclusion:

If you have a memory-constrained machine and want to run under Linux, using Wine may be the better choice.

I hope Valve cares enough about Linux to fix what they can on their side and work with folks from Intel to fix their performance problems.

You will probably have better luck on an Nvidia GPU, since people generally report performance very similar to Windows, though probably still with slower startup times and higher memory usage.

Some info on how to compile Mesa 32 bit on 64bit Ubuntu:

Unfortunately this only works on Ubuntu 13.10, not on older versions. On older versions you must also remove the 64-bit versions of the compiler, which was a bit too much of a requirement for me.

apt-get install gcc-multilib g++-multilib binutils-multiarch libx11-dev:i386 libdrm-dev:i386 

and install other packages one by one as configure complains about them.

apt-get build-dep mesa -ai386

wants to install too much and remove some 64-bit packages I need, and we don't actually need the i386 LLVM dev packages to compile just the Intel driver.

I use this script to compile just the i965 driver for mesa:

sh autogen.sh
make clean
# force a 32-bit build and point pkg-config at the i386 .pc files
export CFLAGS="-m32 -O3 -mtune=native -march=native -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security"
export CXXFLAGS="-m32 -O3 -mtune=native -march=native -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security"
export PKG_CONFIG_PATH=/usr/lib/i386-linux-gnu/pkgconfig
# build only the i965 DRI driver: no Gallium drivers, no EGL
./configure --disable-egl --enable-glx-tls --with-gallium-drivers= --with-dri-drivers=i965 --enable-32-bit
make -j4

Sunday, 27 May 2012

Samsung Galaxy S2 vs Ubuntu PC performance

Introduction


It seems that many people assume that a 1.2 GHz dual-core mobile ARM CPU should be almost as fast as a PC CPU running at a similar frequency. They're wrong.

ARM cores are indeed more power-efficient per square mm of die area, on the same production process, than Intel x86 and AMD64 architecture processors. Most of the efficiency comes from a simpler and more space-efficient instruction set, but that advantage typically benefits only the front-end of the CPU, which is not the biggest spender of those precious milliwatts.

The other reasons why modern dual- or quad-core mobile phones can run on a fraction of the power that notebook or desktop (PC) CPUs need:

  • fewer computation units on the CPU die (fewer SIMD, ALU, etc. units)
  • smaller caches than PC CPUs
  • power gating parts of the CPU (though laptop and desktop CPUs have also done this for a number of years)
  • a significantly slower DRAM interface than PC CPUs, using slower DDR RAM (LPDDR2)

RAM speed significantly impacts many parts of phone performance. Executing complex JavaScript, image or video processing, and Web page rendering are just some of the tasks that benefit significantly from more RAM bandwidth.

Your ARM device having significantly less RAM bandwidth is also a big reason why you will probably avoid developing software on your shiny new ASUS Transformer Prime tablet/laptop (though I would certainly try :) )

So how much slower is your Android cell phone RAM than your PC RAM?


Unfortunately, I couldn't find any RAM benchmarking software that would run both on a Linux PC and on an unrooted Android device. There is a nice port of NBench, but NBench is a bigger benchmark and it needs some time before it prints out the one thing we need, the memory index. Also, it doesn't output an MB/sec number, which is unfortunate, since that is a really clear metric.

So I took the really simplistic mbw (apt-get install mbw), made it even simpler (removed the memcpy tests and left only the dumb array-assignment part), and made an Android NDK version of it.


RAMbandwidth
Source here. Be sure to close any apps before running it on a PC or your phone. The default array size being copied is 20 MB (the app needs 40 MB to perform the test) to better support low-memory devices.

Here are some results (20 MB array size, average of 20 repetitions; run "mbw -t1 20 -n 20", default settings in RAMbandwidth):
~5400 MB/sec - Intel Xeon X3430, DDR3 memory, under moderate MySQL load (2009)
~2200 MB/sec - Intel Core 2 E8200, PC2-6400 DDR2 RAM, desktop PC (2008)
~1100 MB/sec - Intel Core Duo L2400, PC2-5300 DDR2 RAM, ThinkPad X60s laptop (2006)
and our mobile contenders:
~500 MB/sec - Samsung Galaxy S2 (2011)
~250 MB/sec - HTC Desire (2010)
~120 MB/sec - Raspberry Pi (2012; under X; with fbdev at 720p it falls to ~90 MB/sec)
~55 MB/sec - HTC Magic (2009; had to use a smaller 10 MB array size because of limited RAM)


The Samsung Galaxy S2 sometimes reports around 440 MB/sec and sometimes 550 MB/sec. I guess it depends on where the kernel allocates the memory; maybe one of the memory banks shares the bus with the GPU, the GSM CPU or some other greedy device.

It should be easy to post some test results of your own hardware, so please share. 

EDIT: Check comments for some more results



Sunday, 29 August 2010

Budget Surfer 0.0.1

Yesterday I wanted to find out how much of our common money has been spent on software licences for MS Windows, Office and other products that are more or less easily replaceable with FOSS equivalents. The only information I've managed to get so far is in the document "Poseban dio Državnog proračuna Republike Hrvatske za 2010. godinu i projekcije za 2011. i 2012." (the itemized part of the Croatian state budget for 2010, with projections for 2011 and 2012), available here: http://mfin.hr/hr/drzavni-proracun-2010 .

I assume that the budget visualization from the vjetrenjača (windmill) project was built from the same data source.

To waste time productively, I decided to write a small program that imports this Excel file into an SQL database and makes it easier to "surf", filter and so on.

If we assume that the software is in the "INFORMATIZACIJA*" (computerization) items, under the sub-items "Rashodi za nabavu neproizvedene imovine" (expenditures for the acquisition of non-produced assets), then the total comes to about 24.5 million kuna.

Informatizacija, Rashodi za nabavu neproizvedene imovine (computerization, expenditures for the acquisition of non-produced assets)



Interestingly, though, the mysterious sub-items titled "Rashodi za nabavu neproizvedene imovine" (which never have any more detailed breakdown) appear not only in the "INFORMATIZACIJA*" items, but also in e.g. "RAČUNALNO KOMUNIKACIJSKA INFRASTRUKTURA U VISOKIM UČILIŠTIMA I JAVNIM INSTITUTIMA" (computing and communication infrastructure in higher education institutions and public institutes), worth 14 million kuna, and in many other categories, which brings us to a total of around 105 million kuna.


What can be assumed to have some value among the software purchases under computerization are the "Nematerijalna proizvedena imovina" (intangible produced assets) items, presumably software built to order, i.e. produced for the needs of the state. The total here is about 74 million kuna.


Monday, 23 August 2010

My cabbage patch, ŠBBKBB ("what would be if it were") a.k.a. a TODO list

If you're not willing to read ambitious and probably pretentious personal rants and brain dumps, feel free to skip this post.


Since today is apparently the day I quit yet another closed source (and, I would add, closed mind) company, it's probably best to relax with a few naive and practical ideas that I know how to pull off. I'll probably wander from ideas for simplifying state bureaucracy, through ideas for Android apps, to a battle plan for winning over the Croatian public sector to the Linux desktop.

But first, what I need to do next: find an employer who will understand and accept that caring for my family, friends, community and my own selfish interests (university, for instance) comes first. I believe such an employer already exists, and if anyone has an idea or a suggestion, feel free to get in touch.

Let's take the ideas slowly:

Down to earth:

1. Extend Kost's existing Android Market account initiative, so that we develop and maintain some kind of HULK donation system for local Free Software Android / whatever applications.

2. An Android app that scans food product barcodes and carries an offline database with a blacklist of E-numbers. You just pick up a product in the shop, scan its barcode, and get red or green depending on whether it contains harmful E additives (with, of course, an option to view the details of the E-numbers and, when online, other product details). Additionally, that action could be used to record the product as purchased, giving us an overview/history of spending and an easier way to pick foods when counting calories or doing some other, more sophisticated diet tracking. Naturally, for organically produced products without a barcode, something else would be needed for simple identification (quick pick of fruit, vegetable, colour, shape, recently used, etc.).


Blue sky:

Ideas in the domain of the relationship between the state and its citizens:

I. The health insurance card:

Since I change jobs relatively often, I've noticed one silly legacy of times past: a separate health insurance booklet that, for some strange reason, has to be replaced every time your employment status changes, and apparently also every time you finish primary school, secondary school or university. That booklet contains an expiry date that means nothing at all anyway (since the insurance can formally "expire" even earlier; more below on how, in a welfare state, you can end up without basic health insurance at all). So the only thing that should matter on it is the health insurance number, or the OIB, or the JMBG, or the ID card number, or the passport number. These are all essentially keys for identifying Croatian citizens, and in theory it would be enough to carry, say, just an ID card or a passport (with a barcode or e.g. a QR code for machine reading of that small key). Your health insurance status would then change transparently in the background when you got your first job, quit, graduated, and so on. We always actually have it anyway (some even get bonus supplementary insurance if they're unemployed or similar), and the only thing that matters is that it is "booked" to someone for tax reasons, but that really doesn't mean we have to replace that piece of plastic every time. I assume someone earns nicely from that plastic, and if it isn't some corruption, then I'm again fascinated by the stupidity in the public sector (I hope it's only stupidity, because that is presumably easier to fix).

The other problem/mess with health insurance is that you can end up uninsured because, say, you thought the expiry date printed on the health card actually meant the expiry date of your health insurance. No, it means only and exactly how long that piece of plastic is valid. Ideally, your insurance status would be checked electronically every time you use a health service, and the user would simply be warned if their status is unclear (for instance, when student rights end, where health insurance usually "transfers" from parents/university to the employment bureau or a future employer).

II. A bit more on healthcare:

It would be nice if diagnostic labs and specialists automatically sent test results digitally to the general practitioner, or to the responsible specialist. Believe me, this would save the kilometres and kilometres people walk carrying results around to doctors themselves. Doctors could then also call patients as soon as the results arrive, instead of patients having to check back every so often. In computing jargon: digital event-based, rather than analog poll-based, event handling. It would also be nice if patients (citizens) had a token for accessing the central bureaucracy, where they could view their own medical record, test results, treatment plan, diet, look up their school grades or, for those with some form of Alzheimer's, see how many properties they own, who their parents are, where they were born, etc. etc. :) There are, of course, many ethical and other questions to consider before deployment. But I assume we don't want something like this built for us one day by some closed source system or, God forbid, Facebook :)

In the variant where the JMBG or OIB or ID card number or passport number really become what they are, namely keys identifying the individual citizen, we no longer need the employment booklet, the transit pass or the Xica, at least in the ideal case (once we have that "token").

Among existing local systems, perhaps the one behind the Xica (or the Studomat) could be the base for something like this.


III. Let's put Linux on the desktops of the public and state sector.

Really, now that a solid and usable Linux desktop exists, do we still want to pay millions every year for MS licences? Maybe we could open a small company that, for the money spent on MS licences, would handle support and the specific needs of state-sector users, and thereby at least somewhat stem the outflow of money, i.e. the unfavourable import/export ratio?

IV. Let's help Marko Rakar make http://proracun.pollitika.com/2010/korisnici.html carry more information, real-time information, data on active public procurement tenders and the like, or persuade the state/government/parliament that such projects be properly done by Narodne novine, i.e. financed by the state itself.

V. Let's prepare all these projects so they can easily become part of something bigger when we join the EU or unite with the USA (cybercommunist approved). Let's use existing (Free Software) solutions wherever we can. Let's not be afraid of investing in global shared infrastructure.

VI. Let's support joint and cooperative housing http://bit.ly/bEbcwK, the Recycled Estate - Vukomerić http://bit.ly/arRK0U and similar projects.

VII. Let's renationalize Hrvatske Telekomunikacije, or at least:

Let's define the public right to the DTK (cable duct infrastructure) more clearly. Let's force HT to join the CIX. Let's give them a licence to lay fibre, set the price of renting a strand at 100 kuna per month per user so the investment pays back faster, but by all means let alternative operators use the infrastructure from day one. Let's regulate maximum and minimum prices. Let's put into law that every telephone pair/link must be able to carry normal broadband, not the current 128K, to force the monopolist to build out local infrastructure (or let's create a budget for this by additionally taxing operators with large market share). Let's fight for symmetric broadband (with useful upload speeds). Let's fight for low-latency broadband. Let's fight for 100-megabit and gigabit internet connections. The last three items also enable a very good peer-to-peer cloud, which we need for some Diaspora-like social networking. Let's actually use the data capacities of public companies. When planning new infrastructure for new or existing neighbourhoods, let's reserve space for everything important (yes, including fibre internet access), so the neighbourhood doesn't get dug up needlessly every so often.

VIII. Let's show up at skill-sharing meetups.

IX. Let's create computers that we can grow the way we grow plants, that generate their own energy the way plants do, and that have a network of roots for interconnecting, much like plants have roots (for a first step, see http://bbf.openwetware.org/).

X. Miscellaneous.

The discussion of this blog post can also be found on FriendFeed.




Sunday, 4 July 2010

Making Android development more enjoyable

Here are some things Google should look into:

0. Include the Android Market in the emulator/test environment

That should be fast and easy to do.

1. Try finding some ways to speed up the compile/test cycle:

a) Avoid moving too much stuff around
Currently, every time, an .apk file must be produced (compressed) on the developer's machine, containing all the code and application resources. This apk containing everything is then transferred to the phone or emulator, unpacked there, and the install procedure is run. It doesn't matter if you just changed one class file; the whole thing is moved around.

I understand that permissions are managed during the install process, but this could be solved in some other way, such as trusting the code for that app by default.

b) Make the test environment run in native code during the development process
I know that using an emulator was a nice and fast solution for you, but a lot of time and energy is lost emulating ARM. Dalvik runs on x86, and so does all the other code. Why don't we just make a jailed/chrooted native environment available for testing? I know this is not a straightforward thing to implement on Windows, and there might be difficulties making it run on Mac OS X, but it's really worth it.



2. Improve the visual layout editor.

There are so many ways this could be improved, but making it faster and more intuitive is the general idea.

One quick suggestion: try making it easier to jump to the corresponding code, or add an option to generate event handler (etc.) code when no code referencing that UI object is found.

A great example of code editor/UI designer integration is Borland Delphi. Even really old versions have an ease of use that Android developers can only dream of. The Android API is more abstract and its UIs use relative layouts, but most of the great concepts from Delphi still apply.

Item 1 b) could also create possibilities for the UI designer tool.