
RadeonHD IRC Logs For 2009-6-13

agd5f: lvella: zaphod mode only works with radeon
lvella: so if I want two screens, I need radeon, but if I want HDMI audio, I need radeonhd...
lvella: cool
lvella: that zaphod-lolz branch seems a bit dead, is it needed?
MostAwesomeDude: I think all the working zaphod code is in master.
lvella: about using the mirrored displays
lvella: the resolution of the tv is bigger than the monitor
lvella: can I set the display to the tv maximum resolution?
wirry: hi there, I still have a problem with getting 3d working on my "old" laptop (c2d, x1600m) with gentoo...it worked well with debian, but I want to switch to gentoo on all my computers and this is my testing system so I don't know yet where to find each config/log exactly
wirry: I'm using kernel 2.6.30 and radeonhd from latest git... http://vanessa.zapto.org/~nrx2g/gentoo_radeon/ <-- there are my xorg and install logs
wirry: ohh well...the problem I have: the system locks up when I try using 3d
wirry: i can still ssh to it after that, but there is no chance to get x working again
wirry: added xorg.conf to the logs
rah: 34 hours Merge remote branch 'main/radeon-rewrite' Dave Airlie 120 -18854/+16432
rah: (mesa master)
rah: looks like there are some big changes happening
uzi18: :)
uzi18: i'm waiting for kernel 2.6.30 on my distro ;) as I'm a dev I could do this myself but ... I don't know this place :P
udovdh: uzi18, ?
udovdh: why wait when you can compile yourself?
udovdh: btw: how current is dri in 2.6.30
uzi18: udovdh, just found - there is kernel-vanilla in my distro updated to 2.6.30 and now my machine is building these packages
wirry: hi, using 3d still locks up my laptop (c2d, x1600m), I'm using gentoo amd64, 2.6.30, logfiles & xorg.conf: http://vanessa.zapto.org/~nrx2g/gentoo_radeon/
taiu: wirry: 3d is done by mesa, can you get the latest (git) version for that
wirry: sure i can
wirry: hmm taiu it worked much longer now (over 10 seconds) but it's still locking up
wirry: ok, i was just lucky...
wirry: after reboot it locked up instantly
taiu: wirry: actually you might be hitting this https://bugs.freedesktop.org/show_bug.cgi?id=21849
taiu: wirry: you would need 2.6.30 final or the patch from that
wirry: i am using the 2.6.30 final
taiu: hmm, ok, I looked at the wrong line ...
taiu: wirry: what apps are you trying with?
wirry: glxgears and torcs
wirry: ohh...when it locks up I can still move the mouse
wirry: any other ideas?
PyroPeter: is "ATI Radeon HD 4350" supported by the radeonhd driver? it has the rv710 chipset
PyroPeter: and if it works, is 3d acceleration supported, too?
PyroPeter: .oO( why are all good open source projects documented so badly? )
pazof: .oO( why do we accept buying cards without the docs to use them? )
bridgman: PyroPeter; yes, current versions of radeonhd support the rv710/HD4350
bridgman: 3D is in another driver (mesa) and is under development
bridgman: whether you use radeon or radeonhd you also need a current drm in order to get EXA and Xv acceleration
bridgman: the easiest way to get that drm is to pick up the 2.6.30 kernel
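A quick way to check whether the kernel drm and the acceleration actually came up (a minimal sketch; the Xorg log path and the package providing glxinfo vary by distribution):

    # did the radeon drm load, and which version does it report?
    dmesg | grep -i drm
    # did the X driver enable EXA/DRI? (log path may differ)
    grep -iE "EXA|DRI" /var/log/Xorg.0.log
    # is direct rendering active from the client side? (glxinfo is in mesa-utils on many distros)
    glxinfo | grep -iE "direct rendering|OpenGL renderer"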
bridgman: pazof; are you talking about hardware docs or design docs for the existing open source drivers ?
pazof: yes
bridgman: have you read the existing hardware docs at http://www.x.org/doc/AMD ?
pazof: my chipset is missing
pazof: R620
bridgman: we don't have separate docs for every specific chip
bridgman: the 6xx/7xx 3d docs cover the entire family, including chip-specific differences
pazof: and do we have all we need to develop the mesa driver?
bridgman: on the display and MC side the 620 is covered by rv630 and m76 docs
pazof: http://www.x.org/doc/AMD << 404
bridgman: there are a couple of new blocks in 620/635 (and subsequent chips) which we're still working on
bridgman: docs, not doc
pazof: hm, I didn't see all this stuff before :)
bridgman: it took a while to write
pazof: long live AMD :)
bridgman: yeah !! ;)
bridgman: sorry, missed a question; yes there's enough to develop a mesa driver
bridgman: we've actually got most of it written, now we're trying to make it work ;)
bridgman: http://cgit.freedesktop.org/mesa/mesa/log/?h=r6xx-rewrite
bridgman: http://cgit.freedesktop.org/mesa/mesa/tree/src/mesa/drivers/dri/r600?h=r6xx-rewrite
bridgman: under normal conditions the mesa driver would probably be finished by now, but all the people who know mesa are off working on other cool things like kernel modesetting, memory management, gallium3d and radeon-rewrite
bridgman: so we're learning mesa and writing the 6xx-7xx support ourselves ;)
yangman: mesa is big and scary
yangman: well, it's certainly big. I haven't decided if it's scary yet ;)
bridgman: my first impression from a distance recalled a line from Swordfish
yangman: ....
yangman: the highlighted quote on IMDB for Swordfish is "I'm not here to suck your dick, Stan." >.>
bridgman: not that one
bridgman: "This is not a nice place, Stanley"
yangman: yes, that would make a lot more sense ;)
bridgman: in fairness, it was a lot nicer for the GPUs that existed at the time it was designed
bridgman: and Gallium3D seems to bring it up to date nicely
bridgman: but there is a non-trivial learning curve
bridgman: at least we think we know why it's not drawing properly now
yangman: I've been reading up on gallium in my spare time
bridgman: what do you think ?
yangman: all those presentation slides don't make much sense until you actually start getting into code
bridgman: yeah, I found one or two header files that seemed to be the best intro
bridgman: slides always try to explain why something is a good idea
bridgman: rather than what it is ;)
yangman: I think I'm finally starting to get how the individual components work. still a bit foggy on how it's all tied together with winsys, but I think I can grasp it
bridgman: yeah, it's really hard to get the partitioning just right when you create something new
bridgman: the nice clean design never quite survives the first implementation
bridgman: but so far it seems to have held up better than most
bridgman: the pipe/winsys split will always seem awkward if we're only developing for one winsys ;)
bridgman: huh; just noticed that redbook hello is only supposed to draw a white square
bridgman: I had been looking at all the funky colours we were getting and wondering what it was supposed to look like ;)
bridgman: it seemed pretty complicated for a hello program
yangman: not sure how much help I'll be actually bringing gallium3D up. not too much time for coding these days
yangman: been doing a bit of adventuring: http://wiki.xkcd.com/geohashing/2009-06-09_49_-123
yangman: I'm still planning to have a go at LLVM and r6xx, though
bridgman: xkcd has a wiki ?
yangman: it does
yangman: at least 2, actually
bridgman: llvm will be interesting, particularly making it play nice with the VLIW instructions on 6xx and higher
yangman: the main one and then the one for geohashing
bridgman: are the coordinates for geohashing randomly picked, or did someone actually go into the swamp ?
amarsh04: recently spent quite a few hours going through all of xkcd's cartoons including mouse-overs
yangman: they're pseudorandom, using MD5, the date, and the nearest day's opening DOW number
yangman: Vancouver is one of the more fun ones because the points are more often in some crazy wilderness area or miles off shore
yangman: everyone's extra motivated to show up on the rare days it's accessible
bridgman: yeah, I always liked that about vancouver; everything around toronto is flat & has cows on it
bridgman: sort of like Munich
yangman: heh
yangman: http://wiki.xkcd.com/geohashing/Vancouver%2C_British_Columbia/Accessibility
yangman: the thing about LLVM is the learning curve is ridiculous. not concept wise, but just grasping the code
yangman: a "trivial" backend is probably a few thousand lines
yangman: that's my estimate for something that'll emit a basic ADD operation
bridgman: yeah... I like the idea of LLVM spitting out TGSI and then GPU-specific code going down to the hardware instruction set
bridgman: the backend for each GPU is going to be pretty different anyways AFAICS
yangman: as I understand it, making it at all useful would hinge on actually implementing optimizers per card
yangman: IR->instruction is fairly simple matching
yangman: but it'll give ridiculously slow code
yangman: so there's some serious operation reordering needed
bridgman: yeah, at least for our GPUs... it's possible that (for example) Intel and NVidia GPUs could share a common optimizer since both are scalar engines
bridgman: not sure
bridgman: not sure how much compiler frameworks have changed in the last decade or so
bridgman: maybe it's easy to write hardware-specific optimizers these days but it doesn't sound like it
yangman: yeah, I can't really say. I can optimize by hand, but haven't studied it formally
yangman: well, with all the interest in GPGPU, maybe people with lots of grant money will start trying to write hardware-targeted LLVM optimizers if/when we have a working-enough backend
bridgman: yep... I suspect that will be a second phase though... first step will be to have enough different workloads running through TGSI for devs to get a feel for what kinds of optimizations are most important
Nightwulf: hi all
yangman: hey Nightwulf
Nightwulf: hi yangman
bridgman: if all the instructions come down operating on 3 or 4 component vectors then 1:1 translation into hardware instructions wouldn't be so bad
bridgman: but if the workload is all single component (likely with GPGPU, I guess) then packing that work into VLIW will take a lot of smarts in the backend
bridgman: mostly making sure data is in the right registers/components to begin with
yangman: well, we need to be able to turn, say, 24 IR instructions into MULADD_D2
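To illustrate the granularity point: if the IR reaching the backend is already vector-shaped, the translation to a wide ALU instruction is close to 1:1, while scalar IR forces the backend to rediscover the parallelism before it can fill the VLIW slots. A minimal LLVM IR sketch (function names are purely illustrative):

    ; vector form: one 4-wide multiply-add, which maps naturally onto
    ; a 4/5-slot VLIW ALU instruction word
    define <4 x float> @muladd_vec(<4 x float> %a, <4 x float> %b, <4 x float> %c) {
    entry:
      %m = fmul <4 x float> %a, %b
      %r = fadd <4 x float> %m, %c
      ret <4 x float> %r
    }

    ; scalar form: the same arithmetic as an independent scalar op; a VLIW
    ; backend has to pack many of these together to keep the slots busy
    define float @muladd_scalar(float %a, float %b, float %c) {
    entry:
      %m = fmul float %a, %b
      %r = fadd float %m, %c
      ret float %r
    }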
udovdh: hello
yangman: morning
bridgman: IR ?
Nightwulf: good news: the background corruption on r6xx using KDE 4.x is gone with 1.2.5 and kernel 2.6.30 :D
bridgman: oh good, finally ;)
bridgman: probably drm fixed it ?
yangman: intermediate representation
Nightwulf: bridgman: yes, i guess
yangman: I should clarify I'm mostly talking about going straight from LLVM IR to r6xx packets
Nightwulf: bridgman: but there is also bad news
yangman: I'm not sure what it'd be like doing LLVM IR->TG... something->r6xx
bridgman: I think that is the plan, isn't it ?
bridgman: Nightwulf; what's the bad news ?
bridgman: I know I just built new everything and my RV620 is locking up
Nightwulf: bridgman: after playing some videos, I see a slowdown which leads to stuttering video, and it gets so bad that you have to restart the X server
Nightwulf: bridgman: radeon doesn't show that behaviour
bridgman: ok, that sounds relatively easy to find; there isn't much accel code difference between the two
bridgman: hold on - 5xx or 6xx/7xx ?
Nightwulf: bridgman: r680
yangman: yeah, I think the latter is what they want to implement. it's hard to think about, though, since I don't know what TGSI is like
yangman: although, wikipedia says "When Gallium targets LLVM the TGSI code is converted to the LLVM instruction set."
yangman: :\
bridgman: yeah, the funny part is that if OpenCL is implemented per Zack's summary you'll get OpenCL programs going through Clang and LLVM to output TGSI, then the TGSI going through LLVM again to the native hardware instructions
bridgman: there would be massive pressure to eliminate the middleman (TGSI) even though that's the key to portability and re-use
yangman: that still makes sense
yangman: ... maybe
yangman: I'd have to think about it once something's actually implemented
yangman: i have LLVM installed but haven't bothered to get clang going
yangman: they both do silly things with the compile/install process that don't play nicely with portage
bridgman_: hmm... internet connection went "poof"
Nightwulf: bridgman_: shout if I can check any possible causes, like the two drivers being used with different options in xorg.conf
agd5f: Nightwulf: does disabling sound in your movie player fix it?
Nightwulf: agd5f: I'll check that
Nightwulf: anything else I can check?
MostAwesomeDude: TGSI-based stuff is complex, but fast.
MostAwesomeDude: Instead of unpacking things into big sets of state, TGSI in code is set up as unions and bitfield structs.
agd5f: Nightwulf: it might be the crtc vline stuff, could try disabling that in the code
Nightwulf: agd5f: k, I'll check both hints later this evening
bridgman_: I guess you could get a fair amount of improvement by forcing some optimizations above the TGSI layer, eg combining scalar operations into vector instructions before generating TGSI
bridgman_: that would make it a lot easier to generate efficient hardware code below the TGSI layer...
MostAwesomeDude: bridgman_: Basically, we want to do things like strength reduction once and only once.
MostAwesomeDude: If we can all agree on where to do that, then we're set.
bridgman_: yeah, that's going to be the challenge
bridgman_: the question is whether everyone will agree that one type of TGSI operation is less costly than another
bridgman_: if so, then the optimization can go above the TGSI layer
MostAwesomeDude: Well, I think that the main bottleneck is still the slang compiler for doing GLSL->IR.
bridgman_: yep... although hopefully the same solution will work well for everyone there
MostAwesomeDude: What I'd do in this case is permit LLVM IR as a fast path for frontends that prefer it.
MostAwesomeDude: So the two paths are GLSL->TGSI->native and GLSL->LLVM->native.
MostAwesomeDude: And then make both the TGSI and LLVM backend emitters do all the optimizations.
MostAwesomeDude: LLVM's optimizations are free; TGSI will need a lot of stuff added to it, but most of the code for doing it is (fortunately) already there.
bridgman_: doesn't that mean you need more optimizers, one per hardware backend ?
MostAwesomeDude: Well, for which IR?
bridgman_: I guess my main objection is with the "LLVM optimizations are free" statement
MostAwesomeDude: LLVM's optimizations are all generic; LLVM uses the HW definitions to pick and choose what it can optimize.
bridgman_: I don't think LLVM knows how to optimize for vector instruction sets, does it ?
MostAwesomeDude: Well, kind of. It knows how to optimize for SIMD and pick SIMD insts.
bridgman_: maybe I need to go through the HW definitions again
bridgman_: yeah, but the SIMD part is invisible on our GPUs, it's the VLIW/superscalar part that is visible to the compiler
MostAwesomeDude: We may have to teach LLVM about how to handle instruction sets that only operate on vectors; it wasn't (exactly) designed for this.
bridgman_: it's more than just vectors, isn't it ? 6xx and higher can sorta-kinda do 5 different operations in one instruction word
bridgman_: so far we mostly use that for vectors
bridgman_: (pixels, vertices etc.)
MostAwesomeDude: Yeah, r6xx will be a very interesting thing.
MostAwesomeDude: But I'm fairly confident that LLVM can handle it.
MostAwesomeDude: After all, LLVM's got x86, Sparc, ARM, Cell... all kinds of very different instruction sets.
bridgman_: I guess the question is how it handles something like Itanium
bridgman_: all of the processors you mentioned are scalar + optional simd
bridgman_: any superscalar processing is extracted by hardware from a scalar instruction stream
MostAwesomeDude: Oh, it does have an IA64 backend.
bridgman_: I think I saw a couple of IA64 boxes in the office...
bridgman_: hmmm
MostAwesomeDude: The basic theory behind LLVM is similar to Gallium; write a backend once and your optimizations will be forever effortless. :3
MostAwesomeDude: Which implies that if we *can* express an r600 in terms of LLVM, then we'll be set.
bridgman_: agreed; the question, I think, is whether LLVM's existing optimizations will actually make writing that backend any easier
bridgman_: when I first went through the docs I didn't see anything that seemed particularly useful for superscalar instruction sets
bridgman_: then again I might not recognize something useful if I saw it ;)
bridgman_: the impression I took away was that LLVM expected you to write an independent optimizer for that kind of stuff
MostAwesomeDude: Hm.
MostAwesomeDude: Possibly, but I don't think so.
MostAwesomeDude: We might have to write our own optimization passes, if we see something that *should* be optimized away, but isn't.
bridgman_: yeah... but mixing two optimizers can be mighty painful
bridgman_: I guess we'll have to see
bridgman_: the good news is that I'm seeing a lot more Google hits on "llvm vliw" than I did a year ago ;)
MostAwesomeDude: Hm, I can't find it, but...
MostAwesomeDude: LLVM is based on the idea of multiple passes.
MostAwesomeDude: So each optimization lives in its own pass.
MostAwesomeDude: So a pass that rewrites presubtractions, for example, might be added by us.
MostAwesomeDude: Passes for things like DCE already exist, as well as legalization and lowering.
MostAwesomeDude: Big list of all passes: http://llvm.org/docs/Passes.html
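For context, a custom optimization would just be one more pass. A rough skeleton against the legacy pass interface of that period (the class and pass name here are made up for illustration, and the details shift between LLVM versions):

    // countinsts.cpp -- toy FunctionPass skeleton (illustrative only)
    #include "llvm/Pass.h"
    #include "llvm/Function.h"
    #include "llvm/Support/raw_ostream.h"

    using namespace llvm;

    namespace {
      struct CountInsts : public FunctionPass {
        static char ID;
        CountInsts() : FunctionPass(&ID) {}

        virtual bool runOnFunction(Function &F) {
          unsigned n = 0;
          for (Function::iterator BB = F.begin(); BB != F.end(); ++BB)
            n += BB->size();   // instructions in this basic block
          errs() << F.getName() << ": " << n << " instructions\n";
          return false;        // analysis only, the IR is not modified
        }
      };
    }

    char CountInsts::ID = 0;
    static RegisterPass<CountInsts> X("countinsts", "Count instructions per function");

Built as a shared object, a pass like this can be loaded into opt with -load and run by name alongside the stock passes.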
bridgman_: the trick is that you really need the VLIW-ness to be considered in the middle of the optimization process, but then LLVM doesn't have a good way to carry the VLIW clustering through the rest of the pipe
bridgman_: here's one discussion I found :
bridgman_: http://www.nabble.com/VLIW-Scheduling-td857833.html
bridgman_: talks about establishing a set of "meta-instructions" but that works a lot better for a 2-issue VLIW than a 5-issue ;(
bridgman_: anyways, it'll be fun ;)
MostAwesomeDude: Yeah, although there *are* cool macros to generate insts, I really would rather not have to lower to 5-inst macros.
MostAwesomeDude: There's also a few other really fun things for graphics cards.
MostAwesomeDude: No pointers, no memory management.
MostAwesomeDude: (Well, no arbitrary pointers. There's an address stack.)
MostAwesomeDude: No predefined calling ABI either.
bridgman_: just noticed something... marcheu mentioned there might be work being done in the LLVM core to add direct support for VLIW
bridgman_: if that's the case then I agree this gets a *lot* easier
MostAwesomeDude: Woot.
bridgman_: here's a big heap of interesting presentations, albeit a few years old :
bridgman_: http://www.ice.gelato.org/about/apr06_presentations.php
bridgman_: general consensus seems to be that you need to explicitly recognize vliw early in the compilation pipe
bridgman_: guess we should start a petition to add VLIW to LLVM ;) ;)
MostAwesomeDude: I think that we'll probably be the ones adding it, in that case.
MostAwesomeDude: s/we'll/TG\/VM/
MostAwesomeDude: ...
MostAwesomeDude: I just escaped "TG/VM" without thinking about it. I'm not sure if I'm okay with this.
bridgman_: perversely enough it looks like gcc already has it
MostAwesomeDude: Not surprising; GCC has a lot of targets.
bridgman_: looks like LLVM on Itanium currently ignores VLIW, then an external assembler handles assembling instructions into groups
MostAwesomeDude: WTF.
bridgman_: and I guess you hope the register allocator did a nice job
bridgman_: it's possible that is old information but I haven't seen anything to counter it
bridgman_: then again if there is it'll be in LLVM lists/forums and I haven't been there for a while
MostAwesomeDude: Yeah, I haven't been keeping up with it, really.
bridgman_: oh well; I have vacation coming up soon ;)
lvella: what are the implications of zaphod mode? because I managed to extend my desktop across two monitors with radeonhd, but as if they were just one screen
lvella: do I really need zaphod mode to be able to see a movie on my tv by using "DISPLAY=:0.1 mplayer movie.avi"?
bridgman_: I think zaphod mode gives you two independent screens, then you can tie them together with xinerama
bridgman_: I guess in your case you probably wouldn't want to...
lvella: I would not want to tie them with xinerama
lvella: I would prefer them truly independent
agd5f: lvella: that's what zaphod mode does
lvella: ok
agd5f: the driver loads twice, once for each head and they are treated as separate X screens
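For reference, a rough xorg.conf sketch of that kind of zaphod layout with the radeon driver (the identifiers and BusID are placeholders; without Xinerama the two screens stay independent, so apps can be placed with DISPLAY=:0.0 / :0.1):

    Section "Device"
        Identifier "radeon-head0"
        Driver     "radeon"
        BusID      "PCI:1:0:0"   # placeholder, check lspci
        Screen     0
    EndSection

    Section "Device"
        Identifier "radeon-head1"
        Driver     "radeon"
        BusID      "PCI:1:0:0"
        Screen     1
    EndSection

    Section "Screen"
        Identifier "Screen0"
        Device     "radeon-head0"
    EndSection

    Section "Screen"
        Identifier "Screen1"
        Device     "radeon-head1"
    EndSection

    Section "ServerLayout"
        Identifier "Zaphod"
        Screen 0 "Screen0"
        Screen 1 "Screen1" RightOf "Screen0"
    EndSection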
lvella: agd5f, don't you sleep?
agd5f: lvella: right now only radeon supports it
agd5f: lvella: sure :)
bridgman_: he slept last Tuesday
lvella: hehe
lvella: is hdmi audio is only available on radeonhd?
agd5f: if you think about it the notion of sleeping ~8 hours at a time is a new concept
agd5f: lvella: yes
agd5f: through most of history, people tended to nap, since there was always stuff to do
agd5f: you couldn't sleep through the night as you had to tend the fire or check on this or that
agd5f: perhaps that's a more natural pace
agd5f: plus, IIRC, most "biological days" (what your body considers a day) aren't 24 hours so you're always fighting with nature
lvella: didn't know about this...
agd5f: but, I actually do sleep at night like most folks :)
MostAwesomeDude: is the one with the weird sleep schedules
agd5f: it's just that when I go to bed varies day to day
lvella: whoa! just enabled DRI and EXA, nice xv playback!
lvella: hdmi audio is not working, any clues?
lvella: aplay -l shows
lvella: card 1: HDMI [HDA ATI HDMI], device 3: ATI HDMI [ATI HDMI]
lvella: but I get no sound with:
lvella: mplayer -ao alsa:device=hw=1.3 music.mp3
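A couple of things worth ruling out on the ALSA side (device numbers follow the aplay -l output above; the sample wav path and mixer control names vary by distribution):

    # unmute/raise the IEC958/HDMI control on card 1
    alsamixer -c 1
    # push a test tone straight at the HDMI device
    speaker-test -D plughw:1,3 -c 2 -t sine
    # or play a known-good wav through it
    aplay -D plughw:1,3 /usr/share/sounds/alsa/Front_Center.wav

If those work, the remaining suspects are the player's device string and whether the driver's HDMI audio options are enabled (see the radeonhd man page).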
Joy: i get this in my xorg log:
Joy: (EE) RADEONHD(0): rhdAtomLvdsDDC: unknown record type: 0
Joy: (II) RADEONHD(0): Query for AtomBIOS Get Panel EDID: failed
Joy: should i report it somewhere?
bridgman_: Joy; I think so; either bugs.freedesktop.org or the radeonhd mailing list