IRC logs of #tryton for Wednesday, 2011-07-13

chat.freenode.net #tryton log beginning Wed Jul 13 00:00:01 CEST 2011
-!- elbenfreund(~elbenfreu@g225234025.adsl.alicedsl.de) has joined #tryton00:12
-!- alimon(~alimon@201.158.247.118) has joined #tryton01:11
-!- elbenfreund(~elbenfreu@g225234025.adsl.alicedsl.de) has joined #tryton01:28
-!- alimon(~alimon@201.158.247.118) has joined #tryton02:29
-!- dfamorato(~dfamorato@2001:470:5:630:cabc:c8ff:fe9b:26b7) has joined #tryton02:58
-!- yangoon1(~mathiasb@p549F39C4.dip.t-dialin.net) has joined #tryton05:00
-!- gremly(~gremly@200.106.202.91) has joined #tryton05:19
-!- helmor(~helmo@46.115.22.167) has joined #tryton05:51
-!- sharoon(~sharoon@c-76-109-201-37.hsd1.fl.comcast.net) has joined #tryton06:00
-!- alimon(~alimon@189.154.110.53) has joined #tryton06:07
-!- enlightx(~enlightx@static-217-133-61-144.clienti.tiscali.it) has joined #tryton08:04
-!- bechamel(~user@host-85-201-144-79.brutele.be) has joined #tryton08:06
-!- enlightx(~enlightx@static-217-133-61-144.clienti.tiscali.it) has joined #tryton08:25
-!- pjstevns(~pjstevns@helpoort.xs4all.nl) has joined #tryton08:51
-!- helmor(~helmo@2.212.106.237) has joined #tryton08:54
-!- ralf58_(~quassel@dslb-088-071-224-216.pools.arcor-ip.net) has joined #tryton09:20
-!- bechamel(~user@cismwks02-virtual1.cism.ucl.ac.be) has joined #tryton09:22
-!- nicoe(~nicoe@ced.homedns.org) has joined #tryton09:44
-!- elbenfreund(~elbenfreu@f055003074.adsl.alicedsl.de) has joined #tryton09:48
-!- cedk(~ced@gentoo/developer/cedk) has joined #tryton09:48
-!- enlightx(~enlightx@static-217-133-61-144.clienti.tiscali.it) has joined #tryton10:09
-!- pjstevns(~pjstevns@a83-163-46-103.adsl.xs4all.nl) has joined #tryton10:18
-!- reichlich(~reichlich@p5793D42F.dip.t-dialin.net) has joined #tryton11:23
-!- cheche(cheche@46.25.80.67) has joined #tryton12:06
-!- ccomb(~ccomb@94.122.84.134) has joined #tryton12:12
-!- mhi(~mhi@p54894D47.dip.t-dialin.net) has joined #tryton12:18
-!- ccomb(~ccomb@94.122.84.134) has joined #tryton12:21
-!- sharoon(~sharoon@204-232-205-248.static.cloud-ips.com) has joined #tryton12:53
-!- ccomb(~ccomb@94.122.106.169) has joined #tryton13:07
-!- reichlich(~reichlich@p5793DBDB.dip.t-dialin.net) has joined #tryton13:11
-!- uranus(~uranus@96.57.28.107) has joined #tryton13:52
-!- vladimirek(~vladimire@adsl-dyn24.78-98-14.t-com.sk) has joined #tryton14:21
-!- elbenfreund(~elbenfreu@p54B92C1A.dip.t-dialin.net) has joined #tryton14:28
-!- sharoon(~sharoon@2001:470:5:630:e2f8:47ff:fe22:f228) has joined #tryton14:29
-!- dfamorato(~dfamorato@2001:470:5:630:cabc:c8ff:fe9b:26b7) has joined #tryton14:54
dfamoratobechamel: ping14:55
bechameldfamorato: hi14:55
dfamoratobechamel: hi14:55
dfamoratobechamel: Did you get my e-mail yesterday ?14:56
bechameldfamorato: yes and I checked the code14:56
bechameldfamorato: but didn't take time to test it14:56
bechameldfamorato: is it fast ?14:56
dfamoratobechamel: Great, so what are your thoughts14:56
dfamoratobechamel: It is almost the same time it takes to load the tryton pool14:56
bechameldfamorato: my thought is that I checked the sphinx doc and discovered that the python bindings are available in the source archive14:57
dfamoratobechamel: Yes, they are.... for querying the searchd server14:57
bechameldfamorato: so I'm curious about the perf penalty of using the python bindings vs letting sphinx read the db14:57
cedkdfamorato: I was a little bit surprised to see that you use SQL queries to read Tryton's Model values14:57
dfamoratobechamel: Which is what we are going to use to implement with your unified search field14:58
bechamelcedk: actually it has the advantage of not interfering with the trytond instance that serves the clients14:59
cedkbechamel: what about function field, translation etc.14:59
sharooncedk: dfamorato: bechamel: probably time to merge two projects - use pysql to generate the sql ??15:00
bechamelcedk: yes, it's the drawback :)15:00
dfamoratocedk: We can implement a function in Postgres to read template_name from ir_translations15:01
dfamoratobechamel:  This sphinx python api -> http://code.google.com/p/sphinxsearch/source/browse/branches/rel201/api/sphinxapi.py15:01
dfamoratobechamel: It's for querying the searchd instance15:01
dfamoratobechamel: It's not a native driver or something like that15:01
bechamelcedk, dfamorato: I'm thinking about an intermediate solution: a script that imports the trytond pool and uses it to feed sphinx, so that the master trytond instance is not directly dependent on sphinx15:02
dfamoratocedk: bechamel , Yes, the current implementation does not require any additional change from the user15:02
dfamoratocedk: bechamel , Furthermore, in future developments the module developer does not need to worry about full text indexing or not15:03
dfamoratocedk: bechamel , So, Tryton users will have the option to use Sphinx search or not if they want....15:04
dfamoratocedk bechamel, If they want to implement...  Just like a standard Tryton module15:04
bechameldfamorato: yes, but to be able to index function field, I think the solution is to use xmlpipe2 to push data to sphinx15:05
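The xmlpipe2 approach bechamel mentions can be sketched as a small generator that renders records as the XML stream the Sphinx indexer consumes. This is a minimal illustration, not the module's actual code: `build_xmlpipe2` and the field names are hypothetical, while the `sphinx:docset`/`sphinx:document` element names follow the Sphinx documentation.

```python
# Hypothetical sketch: render records (dicts with an 'id' key) as an
# xmlpipe2 document that a Sphinx xmlpipe2 data source could consume.
def build_xmlpipe2(records, fields=('rec_name',)):
    out = ['<?xml version="1.0" encoding="utf-8"?>']
    out.append('<sphinx:docset>')
    out.append('<sphinx:schema>')
    for name in fields:
        out.append('<sphinx:field name="%s"/>' % name)
    out.append('</sphinx:schema>')
    for record in records:
        out.append('<sphinx:document id="%d">' % record['id'])
        for name in fields:
            value = record.get(name, '')
            # minimal escaping for illustration; a real implementation
            # should use xml.sax.saxutils.escape
            value = value.replace('&', '&amp;').replace('<', '&lt;')
            out.append('<%s>%s</%s>' % (name, value, name))
        out.append('</sphinx:document>')
    out.append('</sphinx:docset>')
    return '\n'.join(out)
```

A real feeder would stream this to stdout for the indexer rather than build one string, but the document shape is the same.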
bechamelI'm also thinking about indexing attachments but this brings other issues, we will see later if there is enough time to do it15:06
cedkdfamorato: but how ModelStorage.search will know to use Sphinx or not?15:06
sharoonbechamel: full text search AFAIK works only on char and text fields... how many of our function fields are actually char / text ?15:06
bechamelsharoon: those are the issues :), to index attachments we need odt2txt, pdf2txt, etc15:07
dfamoratobechamel, cedk ; Sharoon is right, the other fields that are not "string" (char/text) are considered attributes15:07
bechamelsharoon: oh sorry I thought you were talking about attachment indexing15:07
dfamoratocedk bechamel , So, in theory, they could only be used for sorting (and relevance ranking)15:07
bechameldfamorato, sharoon: party.full_name and stuffs like that15:08
cedkand the translation ?15:08
dfamoratocedk bechamel I get it15:08
cedksorry but you have to use the ORM of Tryton to get right values to index15:08
sharoondfamorato: so xml pipe is your answer!15:09
dfamoratocedk: Translation can be retrieved from SQL as well.15:09
bechamelcedk: what do you think about leaving indexing outside trytond itself ?15:09
sharoondfamorato: i think you will also need inherited indexes (indices) for each language that is translatable in tryton15:10
dfamoratosharoon: You are right. I already inherit data-sources15:11
cedkbechamel: don't understand15:11
dfamoratosharoon: I can inherit indexes as well, basically we define which morphology(stem) to use on the inherited index15:11
dfamoratosharoon:  not hard to implement (at least on current implementation)15:12
dfamoratosharoon:  might be harder if I use the xmlpipe15:12
bechamelcedk: my initial idea was to implement a hook in create and read, in order to push data to sphinx15:12
cedkbechamel: yes agree with that15:12
bechamelcedk: but this means that if sphinx is down or slow, trytond will get slow/unresponsive15:12
dfamoratobechamel: That is exactly my concern.... pushing data to sphinx15:12
sharoondfamorato: is there a concept of push to sphinx ? i have only seen sphinx pull data15:13
cedkbechamel: but it was decided to use a mq15:13
cedkbechamel: mqueue15:13
bechamelcedk: not really decided, it was an option15:13
bechamelcedk: mq means another dependency15:13
cedkbechamel: or just a table in Tryton, I don't care15:13
bechamelcedk: what happens if sphinx is down, and the trytond server is under heavy use ?15:14
bechamelcedk: ok this is maybe better15:14
cedkbechamel: and what happens if the database server is down but trytond is running ?15:15
bechamelcedk: yes of course, but adding a new spof is not a good idea (especially since it's possible to have trytond running correctly while sphinx is down)15:16
cedkbechamel: just catch connection error to sphinx and fallback to default behavior15:17
cedkbechamel: and put a log message15:17
bechamelcedk: and how to know what we need to index once sphinx is back ?15:17
cedkdfamorato: so now, you have only a pull method to fill sphinx ?15:18
cedkbechamel: because the table sphinx_job will be filled15:18
dfamoratocedk: Not so sure, I am checking... xmlpipe2 seems to be pull15:18
cedkdfamorato: what is the current design?15:18
sharoondfamorato: pull / push ?15:18
cedkdfamorato: have you a small schema?15:19
dfamoratocedk: This new xmlpipe ( http://sphinxsearch.com/docs/2.0.1/xmlpipe2.html ) appears to enable push streams15:19
bechamelcedk: My idea of a "sphinx table" is to keep the timestamp of the last indexed record, and a cron would be responsible for comparing this time with current records and indexing the new/updated records15:19
dfamoratocedk: Sorry, no schema... current module design is pull from tryton pool15:19
bechamelcedk: no need to copy in a table records that are already in the db15:20
bechamelcedk: the only exception I see is when we delete records15:20
bechamelcedk: we have to store all the ids (until the next push)15:20
-!- saxa(~sasa@189.26.255.43) has joined #tryton15:21
dfamoratobechamel: using xmlpipe2 we can put deleted records in a "kill list"15:22
bechameldfamorato: yes, but I'm talking about the "sphinx is down" scenario15:22
bechamelI mean, instead of trying to push data synchronously and put stuff in a table/queue when there is a problem, I propose to just use a cron that pushes all updated data since its last run and at the end just writes in the db the datetime of its last run15:23
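bechamel's cron-style delta push can be sketched in a few lines. This is an illustration only: `fetch_modified` and `push_to_sphinx` are hypothetical placeholders for "query trytond for records written since the last run" and "feed them to the indexer", and the state dict stands in for whatever small persistent store the script would use.

```python
import datetime

def run_delta_push(state, fetch_modified, push_to_sphinx, now=None):
    """One cron run: push everything modified since state['last_run'],
    then record the datetime of this run. Returns the number pushed."""
    now = now or datetime.datetime.utcnow()
    records = fetch_modified(state.get('last_run'))  # None on the first run
    if records:
        push_to_sphinx(records)
    state['last_run'] = now  # a real script would persist this to disk
    return len(records)
```

The key property is that nothing is queued synchronously: a missed run just means the next run's delta is larger.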
-!- pheller(~pheller@c1fw226.constantcontact.com) has joined #tryton15:24
dfamoratobechamel: And how often do you intend to run this cron15:24
dfamoratobechamel: every minute ?15:24
bechameldfamorato: yes or every 10 second15:25
sharoonbechamel: i feel this is too frequent for a cron task.... might be better to make it async and execute as and when data appears ?15:25
bechameland IMO it should run in a separate process15:25
sharoonbechamel: probably use triggers15:25
bechamelsharoon: triggers work synchronously, no?15:26
sharoonbechamel: could be easily made async15:26
bechamelsharoon: it will cost an extra thread for each db access15:27
cedkI'm not sure about the design of storing what needs to be updated because you cannot know15:28
bechamelcedk: know what ?15:28
sharoonbechamel: cedk: i think there is a strong confusion here about `search` and `full text search`15:29
cedkbechamel: when to update the index for a field15:30
cedksharoon: I don't think15:30
sharooncedk: are you planning to completely replace ModelSQL.`search` with sphinx functionality ?15:30
cedksharoon: no just the like operator15:30
bechamelsharoon: the last time we talked about providing it through search_rec_name15:31
bechamelcedk: do you have an example where we don't know what to index ?15:31
cedkbechamel: like that no, but I'm pretty sure we have function fields that depend on the value of other Models15:32
cedkbechamel: and even if we don't have right now in Tryton, it is possible to get it15:32
bechamelcedk: good point15:32
cedkbechamel: and I don't want us to implement a store=True like OE15:33
bechamelcedk: :)15:33
bechamelcedk: and with cache_invalidation methods :)15:33
cedkso I'm wondering if the sphinx index should not be filled the way google crawls the web15:33
cedkwith perhaps a table of modified records to be indexed first15:34
bechamelcedk: actually the problem of "fields that depend on other fields" is still there even with a queue (or queue-like table)15:34
bechamelcedk: ok so we need a way to constantly re-index the db in the background15:35
dfamoratobechamel: you mean re-index the whole db ?15:36
dfamoratobechamel: each time ?15:36
bechameldfamorato: 1) yes  2)no15:36
cedkdfamorato: yes but like a crawler15:36
cedkdfamorato: + a specific list of records to index in priority (last modified and newly created)15:37
dfamoratobechamel: re-indexing the database is what takes most of the time on sphinx search15:37
bechameldfamorato: yes but the idea is to do it slowly, to not kill postgresql15:38
cedkdfamorato: not a re-index but an update on randomly selected records15:38
bechameldfamorato: like index 1000 records and then sleep some time15:38
dfamoratocedk: we can index the "base index"  + the DELTA15:38
cedkwe also need a table for removed records15:38
cedkdfamorato: don't understand15:38
bechamelcedk: the deletion list will be like the "hot record" list15:39
dfamoratocedk: if we build a master index every 12 hours (example), we can then build the index of the DELTA (changed) data for the next 11 hours 59 minutes..15:39
dfamoratocedk bechamel, But with this15:40
dfamorato"15:40
bechamelcedk: delta in sphinx vocabulary is what we call the list of changed records15:41
cedkdfamorato: ok but why rebuild the index ?15:41
bechamelcedk: to play the role of the crawler15:42
bechamelcedk: but in one shot15:42
cedkbechamel: that's the delta update15:42
dfamoratocedk bechamel: with this new proposed way, we would need the tryton server to keep a table of all changes that occurred in a specific period15:42
bechameldfamorato: yes15:43
dfamoratocedk:  think of sphinx as a mysql instance/database15:43
cedkdfamorato: I know what is sphinx15:44
cedkbut rebuild the index on fix time without any reason is not good15:44
bechameldfamorato: but we don't care about the period: when a record is changed, its id is added to the table, and when it is indexed, it gets removed from the table15:44
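The "changed ids" table bechamel describes has queue-like semantics: insert on change, pop on index. A minimal sketch, using an in-memory sqlite table standing in for a real trytond table (the table and helper names are made up for illustration):

```python
import sqlite3

# In a real deployment this table would live in the trytond database.
conn = sqlite3.connect(':memory:')
conn.execute(
    'CREATE TABLE index_queue ('
    'model TEXT, record_id INTEGER, PRIMARY KEY (model, record_id))')

def mark_dirty(model, record_id):
    """Called on write/create: remember that this record needs indexing."""
    conn.execute(
        'INSERT OR IGNORE INTO index_queue VALUES (?, ?)',
        (model, record_id))

def pop_dirty(limit=1024):
    """Called by the indexer script: take a batch and clear it."""
    rows = conn.execute(
        'SELECT model, record_id FROM index_queue LIMIT ?',
        (limit,)).fetchall()
    conn.executemany(
        'DELETE FROM index_queue WHERE model = ? AND record_id = ?', rows)
    return rows
```

The composite primary key with `INSERT OR IGNORE` gives the dedup behaviour discussed: rewriting a record before it is indexed does not enqueue it twice.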
cedkon very large database, it could take a lot of time15:44
dfamoratobechamel: got it... when pull from db.. then pop from db15:44
dfamoratocedk:  yes, it could take a lot of time15:45
sharooncedk: bechamel: "we don't care about the period" -> that cannot be true.... you create a product, search for the product in search_rec_name to create a sale order and it won't be there15:45
dfamoratocedk: you need to rebuild indexes in the case you want to add new fields/attributes to be indexed15:45
cedkdfamorato: so don't rebuild index on fixed time, we just need to have an option to do it in case of trouble15:45
bechameldfamorato: yes like a queue actually, but without adding a new dependency15:45
cedkdfamorato: what ? you can not append to the index?15:46
bechamelcedk, dfamorato: actually nobody wants to re-index everything every time, I don't know why you talk about it15:46
bechamelcedk: yes this is what delta are for15:46
dfamoratocedk:  Yes, you can append to the index... but if you want to add a new "column"15:46
cedkdfamorato: why new column ?15:46
dfamoratocedk: new "db column" to be indexed.. then you have to re-index data15:47
cedkdfamorato: the Model structure does not change when you add a new record15:47
cedkdfamorato: which new column?15:47
dfamoratocedk: let's say in the future a module needs an extra column to be indexed... then you need to re-index data... But it will not occur frequently15:48
dfamoratocedk:  It would be an optional manual step to rebuild index from scratch15:48
bechamelsharoon: except if "hot records" are pushed rapidly15:48
cedkdfamorato: yes as I said15:48
bechamelcedk: so, if we are ok to use a table to put record ids that have changed, how to use it: 1) all the time 2) only when sphinx is down15:51
cedkbechamel: all the time15:53
dfamoratocedk bechamel : Just to make sure I understand correctly. We are using a table instead of a MessageQueue (rabbit) because we don't want an extra dependency ?15:53
dfamoratocedk bechamel : Is it the intention to make full-text search the default implementation then ?15:53
dfamoratocedk bechamel : Because in order to implement full text search.. we already have a dependency which is the Sphinx Server itself...15:54
bechameldfamorato: yes using rabbit just for this is a bit overkill15:54
dfamoratocedk bechamel : So, if someone wants the benefit of the full text search, IMHO it would not be that hard/nonsense15:55
bechameldfamorato: full text search should be an option15:55
dfamoratocedk bechamel : to use a message queue15:55
cedkdfamorato: sphinx will be an option in Tryton configuration15:57
cedkdfamorato: but we don't need another piece of software just for storing the records to process15:57
cedkdfamorato: your sphinx script can do the job15:58
dfamoratocedk: ok.. got it15:58
bechameldfamorato: imo a message queue is overkill because 1) we are not doing multi-process communication 2) we already have postgresql15:59
dfamoratobechamel: I understand15:59
cedkbechamel: but it would be good to have a long polling wait to retrieve records from the table15:59
bechamelcedk: is it possible ?16:00
dfamoratocedk bechamel We can pull data in ranges and steps....16:00
dfamoratocedk bechamel: Sphinx allows us to tell how many rows to pull.... default is 102416:00
-!- alimon(~alimon@189.154.110.53) has joined #tryton16:01
bechamelI found this http://www.postgresql.org/docs/8.4/static/sql-notify.html16:01
bechamelbut "There is no NOTIFY statement in the SQL standard. "16:01
cedkbechamel: not standard16:01
bechamelACTION back in 2 min16:02
cedkwe must forget about postgresql, we must only use trytond16:03
bechamelcedk: so we must select from the table and sleep in a loop, there is no way to "poll wait"16:06
bechamelcedk: or with a signal between threads ?16:08
dfamoratocedk bechamel : I think we can do something in sphinx16:09
dfamoratocedk bechamel : SPHINX DOC ( ranged query throttling, in milliseconds, optional, default is 0 which means no delay, enforces given delay before each query step)16:10
dfamoratocedk bechamel : Sorry.. that is for SQL data sources16:11
bechameldfamorato: yes16:11
bechameldfamorato: this makes no sense if we push data ourselves :)16:11
dfamoratobechamel: got it16:12
cedkI propose the first implementation to be a simple loop with a sleep16:18
-!- zodman(~zodman@foresight/developer/zodman) has joined #tryton16:18
cedkafter that we could add interprocess communication if needed16:18
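cedk's first-cut design, a plain poll-and-sleep loop, can be sketched as below. `poll` and `handle` are hypothetical callables standing in for "read a batch of dirty ids" and "push them to sphinx"; `max_iterations` exists only so the sketch can terminate in a test.

```python
import time

def indexer_loop(poll, handle, interval=10, max_iterations=None):
    """Simple loop with a sleep: process work when there is some,
    otherwise sleep for `interval` seconds and try again."""
    done = 0
    while max_iterations is None or done < max_iterations:
        batch = poll()
        if batch:
            handle(batch)
        else:
            time.sleep(interval)
        done += 1
```

If latency ever matters, this is the spot where a notification mechanism could later replace the sleep, without changing the callers.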
bechamelcedk: ok16:19
dfamoratocedk bechamel : So, the new trytond table and workflow. Should I implement that or does bechamel implement it ?16:19
bechamelnext decision, trigger vs write/create/delete overloading16:20
bechameldfamorato: I propose that you implement it, but I will help you to design it16:20
dfamoratobechamel: ok, great... so it's decided then....16:21
bechameldfamorato: have you already written a tryton module ?16:21
cedkbechamel: no need for a module16:21
cedkeverything is in the base16:21
bechamelcedk: yes but it will look like a module, it's just that it will be in another directory16:22
cedkbechamel: in ir/16:22
dfamoratobechamel: No, i have not written a tryton module16:22
bechamelcedk: ir/sphinx_search.py ?16:22
cedkbechamel: I don't think we need to name it after sphinx16:23
cedkthink generic16:23
bechamelcedk: so like with backend/postgresql backend/sqlite ?16:24
cedkbechamel: no16:24
cedkI think we don't need a table in trytond16:25
cedkthe sphinx script could just look at each Model for the latest modified16:25
cedkand keep the timestamp per object16:26
cedkperhaps in a sqlite DB16:26
cedkor a pickled dict16:26
cedkdfamorato: did you understand?16:28
bechamelcedk: what about deleted records ??16:28
dfamoratocedk: sorry but i did not understand16:28
cedkbechamel: the search engine could remove them on the fly16:32
cedkand the crawler could also do it16:32
cedkdfamorato: could you search on id in sphinx ?16:32
dfamoratocedk: yes, you can search an ID16:33
dfamoratocedk: and each document id must be unique16:33
dfamoratocedk: so, basically i import the "postgres" id as of this moment16:33
cedkbechamel: so we could have a crawler that remove deleted record from the index16:33
bechamelcedk: so you mean: loop over the whole sphinx index and test if each record still exists ?16:34
cedkbechamel: yes16:35
bechamelcedk: :/16:35
cedkbechamel: it is like the crawler idea16:37
cedkbechamel: and I think that the result of querying sphinx will be put in SQL clause: id in (..)16:38
bechamelcedk: ok, but it means that when we search we must check if the ids returned by sphinx are still in the db16:38
bechamelcedk: ok16:38
cedkbechamel: will be done by postgres16:38
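Folding Sphinx hits back into SQL, as cedk describes, means turning the returned ids into an `id IN (...)` clause, so records deleted from Postgres since the last index pass simply drop out of the result. A sketch with a hypothetical helper (using `%s` placeholders as in psycopg-style parameterized queries):

```python
def ids_to_clause(ids):
    """Build a parameterized SQL fragment from Sphinx result ids."""
    if not ids:
        return ('FALSE', [])  # match nothing rather than emit "IN ()"
    placeholders = ', '.join(['%s'] * len(ids))
    return ('id IN (%s)' % placeholders, list(ids))
```

The empty-result guard matters: `IN ()` is invalid SQL, so a search with no Sphinx hits must short-circuit to an always-false clause.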
bechameldfamorato: still with us ?16:38
dfamoratobechamel: yes16:39
bechameldfamorato: you understand the idea ?16:39
bechamelso we are back to the idea of using an independent script, but instead of letting sphinx read the db we will push data to it with xml_pipe16:40
dfamoratobechamel: we can put the document to be deleted on the "kill list"16:40
bechameldfamorato: yes16:41
dfamoratobechamel: which will remove the document from the index on the next indexing16:41
dfamoratobechamel: but still, where to store the data before the sphinx indexing/synchronization16:42
dfamoratobechamel: cedk said no need to a trytond table16:42
bechameldfamorato: deleting documents will also work like a crawler, which means walking through the sphinx index and checking if each record is still in tryton, if not: kill it16:43
dfamoratobechamel: sorry, this looks contradictory, trytond will push data to sphinx through xmlpipe2.... but on data deletion sphinx needs to connect to tryton ?16:45
cedkdfamorato: it is not trytond that pushes data to sphinx but an independent script that will connect to trytond16:47
bechameldfamorato: no, the same script that pushes data will also check sphinx content to see if there is stuff to delete (but I still don't know how to do it)16:48
dfamoratocedk: Can i use proteus to connect to tryton then ?16:48
cedkit could even be multi-threaded16:48
cedkdfamorato: no, you must import trytond16:48
bechameldfamorato: do you know if it's possible to search by id range, e.g. search for all records whose ids are between 0 and 1000 ?16:49
cedkdfamorato: with proteus it will generate too many connections etc.16:50
dfamoratobechamel: we can index by ranges.....16:50
dfamoratobechamel: not so sure if we can search by ranges16:50
bechameldfamorato: do you have a sphinx instance running ?16:52
bechameldfamorato: I see that one can do search query like "@name Joe"16:52
bechameldfamorato: so maybe "@id <1000"16:53
dfamoratobechamel: I have one.. but I need vpn to acces this server.....16:53
dfamoratobechamel: give couple minutes... I know we can select which search index to match16:54
bechameldfamorato: another solution is to search for the empty string and use limit and offset16:54
bechameldfamorato: no problem16:54
cedkbechamel: why do you want that?16:55
dfamorato bechamel: here is what can be queried from the command line https://gist.github.com/1080457 16:55
bechamelcedk: to get the ids that are in sphinx in order to check if they are still in postgres (and if not kill them)16:57
cedkbechamel: ok16:57
cedkbechamel: otherwise it can loop on all index entries16:58
bechamelcedk: yes but how to get them16:58
bechamelI found this http://sphinxsearch.com/docs/2.0.1/sphinxql-reference.html16:58
bechamelthis allows querying sphinx with an SQL syntax16:59
dfamoratobechamel: Yes, we can do that17:01
dfamoratobechamel: But then we need to write SQL syntax for this query, and make the sphinx search server listen on that port as well17:02
dfamoratobechamel: And the sphinx python API does not use the SphinxQL language17:02
dfamoratobechamel: So, not all functionality accessible in SphinxQL is accessible via the python API17:03
bechameldfamorato: anyway, if the python api offers a way to search by id, it's also good17:05
bechameldfamorato: actually any method that allows us to consistently walk through all the sphinx records is ok17:06
dfamoratobechamel: SPHINX DOC = Query(self, query, index='*', comment='') method of sphinxapi.SphinxClient instance17:07
dfamoratobechamel: i am trying to query by id.... couldn't figure out a way yet17:08
bechameldfamorato: I saw "@name Joe"  type of queries here http://sphinxsearch.com/docs/2.0.1/extended-syntax.html17:10
dfamoratobechamel: maybe we will have to store the id as id and also as a string_field17:12
dfamoratobechamel: in order to be matched17:13
dfamoratobechamel: i'm on a call, will be back in 5 min17:13
-!- helmor(~helmo@2.209.26.248) has joined #tryton17:14
bechameldfamorato: actually it is possible to connect to and query sphinx with the mysql client (and so use SphinxQL)17:17
dfamoratobechamel: yes, yes it is17:18
bechameldfamorato: so, quick recap: the script will create two threads:17:20
bechamelone that will crawl the ids in sphinx, check if they are still in tryton, and delete them from sphinx if they are no longer in tryton17:21
bechamelanother that will check for new records in tryton (newer than the last record seen before), push them to sphinx and store the datetime of the last record it pushed in a small file/sqlite db17:24
cedkbechamel: + 1 thread that will crawl any Tryton's record17:32
bechamelcedk: oh yes I forgot that one17:36
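The deletion-crawler thread from the recap can be sketched as a paged walk over the ids Sphinx knows about, returning the ones that no longer exist in Tryton (candidates for the kill list). `fetch_sphinx_ids` and `record_exists` are hypothetical placeholders for the Sphinx pagination query and the trytond existence check:

```python
def find_stale_ids(fetch_sphinx_ids, record_exists, page_size=1000):
    """Walk all ids indexed in Sphinx in pages; collect those whose
    record no longer exists in Tryton, so they can be kill-listed."""
    stale, offset = [], 0
    while True:
        page = fetch_sphinx_ids(offset, page_size)
        if not page:
            break
        stale.extend(i for i in page if not record_exists(i))
        offset += page_size
    return stale
```

In practice `record_exists` would be one batched query per page rather than per id, but the control flow is the same.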
-!- gremly(~gremly@200.106.202.91) has joined #tryton17:39
dfamoratobechamel: sorry for the delay, just finished my call... had to take it17:43
dfamoratobechamel: so, I got it... I will come up with a draft and if I have any questions, I will let you guys know17:43
cedkdfamorato: release early, release often :-)17:44
dfamoratocedk: Yes, it's a habit (bad or not) of mine not to release code that does not glue properly...17:45
-!- woakas(~woakas@200.106.202.91) has joined #tryton17:45
cedkdfamorato: by release I mean a codereview for us17:46
dfamoratocedk: yes, I got that... didn't push any code to codereview yet....17:47
dfamoratocedk: afraid of the comments :)17:47
bechamelcedk, dfamorato: I discovered that with github it's possible to leave comments on commits, nice feature17:47
dfamoratocedk: and the lack of tests17:47
dfamoratobechamel: it is possible... also, i can edit code directly on github.... can create a branch of  the project, work on new features and then merge back to core17:48
dfamoratobechamel: Github is awesome.... and it works seamlessly with mercurial as well...17:49
dfamoratobechamel: You can clone my project as a mercurial project.. contribute to it and then make a pull request to me... http://hg-git.github.com/17:50
bechameldfamorato: IMO it's better to do the codereview with the usual tool in order to centralize stuff17:52
bechameldfamorato: but using github to store your repo is perfectly ok17:53
dfamoratobechamel: I understand. It is not my plan to change the workflow of the standard module development in tryton. I am just advocating Github for your future personal non-tryton related projects =D17:54
-!- sharoon(~sharoon@2001:470:5:630:e2f8:47ff:fe22:f228) has joined #tryton17:55
dfamoratobechamel: Well, if that is all then, I will go for lunch... Now that finals are over, I should be on IRC every day17:56
bechameldfamorato: ok, enjoy your meal :)18:05
bechamelACTION leave office, bbl18:05
-!- vladimirek(~vladimire@adsl-dyn24.78-98-14.t-com.sk) has joined #tryton18:14
-!- pjstevns(~pjstevns@helpoort.xs4all.nl) has joined #tryton19:51
-!- pjstevns(~pjstevns@helpoort.xs4all.nl) has left #tryton19:54
-!- plantian(~ian@c-67-169-72-36.hsd1.ca.comcast.net) has joined #tryton19:59
-!- chrue(~chrue@host-091-097-191-037.ewe-ip-backbone.de) has joined #tryton20:00
reichlichis there any documentation about the workflow model?20:01
cedkreichlich: nope, but you can have a look at the OE one, it should still be almost valid20:03
reichlichcedk, great20:06
-!- bvillasanti(~bruno@186-129-248-247.static.speedy.com.ar) has joined #tryton20:07
-!- sharoon(~sharoon@2001:470:5:630:e2f8:47ff:fe22:f228) has joined #tryton20:15
sharooncedk: your mail says "Sometimes we indent with 4 spaces and other times with 4 spaces, I think this is wrong."20:17
sharooncedk: i did not understand20:17
cedksharoon: s/4 spaces/8 spaces/20:21
sharooncedk: ok, so its just a typo20:21
-!- sharoon(~sharoon@2001:470:5:630:e2f8:47ff:fe22:f228) has left #tryton20:32
-!- bechamel(~user@host-85-201-144-79.brutele.be) has joined #tryton20:43
-!- chrue1(~chrue@dyndsl-091-096-014-184.ewe-ip-backbone.de) has joined #tryton20:57
-!- ccomb1(~ccomb@94.122.99.232) has joined #tryton21:02
-!- ecarreras(~under@unaffiliated/ecarreras) has joined #tryton21:47
-!- nicoe(~nicoe@146.81-247-81.adsl-dyn.isp.belgacom.be) has joined #tryton22:17
-!- mhi(~mhi@p54894D47.dip.t-dialin.net) has joined #tryton22:23
-!- elbenfreund1(~elbenfreu@p54B95832.dip.t-dialin.net) has joined #tryton23:09
-!- dfamorato(~dfamorato@173-9-190-185-miami.txt.hfc.comcastbusiness.net) has joined #tryton23:16
-!- bvillasanti(~bruno@186-129-248-247.static.speedy.com.ar) has left #tryton23:26

Generated by irclog2html.py 2.11.0 by Marius Gedminas - find it at mg.pov.lt!