Sunday, December 31, 2006

xml

XML::Simple
ok

my $outstring="";
$outstring.="";
print $outstring;
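The markup appended to $outstring above was stripped when the post was published. As a stand-in, a minimal XML::Simple round trip (the element names and data here are purely illustrative):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use XML::Simple;

# Parse a small XML string into a Perl structure and write it back out.
# Repeated elements become an array reference by default.
my $xml = '<opt><title>Koha</title><title>MARC</title></opt>';
my $ref = XMLin($xml);                      # { title => [ 'Koha', 'MARC' ] }
print "$_\n" for @{ $ref->{title} };
print XMLout($ref, RootName => 'opt');
```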

amazon online reader
while (($row = $sth->fetchrow_arrayref)) {
    # The HTML appended here was mangled when the post was published.
    # Recoverable pieces: an Amazon reader link
    # (.../ref=sib_dp_pt/...#reader-link), a cover thumbnail
    # (....01.THUMBZZZ.jpg, alt "library cover image"), then
    # "Title:" $row->[1] and "Author:" $row->[2], keyed on $row->[0].
    $outstring .= "...Title:$row->[1] Author:" . $row->[2] . "...";
    # $outstring .= "\n\n$row->[0] results";
}

    $outstring.="";
    print $outstring;

Friday, December 29, 2006

Wednesday, December 27, 2006

Current work status


1. Finished the npl OPAC po files for zh_TW, zh_CN, jp_JP, kr_KR
    zh_TW http://farm1.static.flickr.com/160/334740307_03ce290dd8_o.png
    zh_CN http://farm1.static.flickr.com/143/334740309_91defd939e_o.png
    jp_JP http://farm1.static.flickr.com/161/334740311_fa95a5ede4_o.png
    kr_KR http://farm1.static.flickr.com/139/334740312_c50652e0df_o.png
The jp and kr translations were produced with translation software
    demo site:zh_TW http://lit184.lacc.fju.edu.tw
    :zh_CN http://lit184.lacc.fju.edu.tw:11000

2. Finished the npl intranet po files for zh_TW, zh_CN
    zh_TW http://farm1.static.flickr.com/149/334740316_7ca93ed2f5_o.png
    zh_CN http://farm1.static.flickr.com/126/334740314_a822afd265_o.png

3. Finished the Amazon review feature
    http://farm1.static.flickr.com/155/334744524_3d18d363c0_o.png
    http://farm1.static.flickr.com/128/334744530_5726672ea5_o.png
    http://farm1.static.flickr.com/139/334744532_8f60596271_o.png

demo site: http://lit184.lacc.fju.edu.tw (reviews cannot be fetched because of a network problem, but the cover images are visible)

4. In progress: ajax search
    ajax php http://140.136.81.145:9999/ajax/
    ajax perl http://140.136.81.145:10000/perl/ajax.pl

    livesearch http://farm1.static.flickr.com/123/334744534_da6bdcc846_o.png
    testlivesearch http://farm1.static.flickr.com/142/334744536_3b19025807_o.png
Thomas and I are currently testing livesearch.
    ref1 http://koha.wikispaces.com/kohadebug#7
    ref2 zebra http://wiki.koha.org/doku.php?id=zebraprogrammerguide
    ref2 zebra http://www.indexdata.dk/zebra/
ref3 zebra http://www.kohadocs.org/Installing_Zebra_plugin.html

Koha can currently use the zebra server to query the catalog (as a Z39.50 server)
    like 1 http://farm1.static.flickr.com/161/334744538_6265736340_o.png
    like 2 http://farm1.static.flickr.com/126/334747872_a4904c351d_o.png
Currently supported: MARC21, UNIMARC, email, XML, MARC...

6. Koha UTF-8 support
    http://dev.mysql.com/doc/refman/4.1/en/localization.html

    perl -V

    perl -V

    Compiled at Dec 16 2005 07:48:39
    @INC:
    /etc/perl
    /usr/local/lib/perl/5.8.7
    /usr/local/share/perl/5.8.7
    /usr/lib/perl5
    /usr/share/perl5
    /usr/lib/perl/5.8
    /usr/share/perl/5.8
    /usr/local/lib/site_perl
    .

    perldoc perllocal
#!/usr/local/bin/perl
use strict;
use warnings;
use ExtUtils::Installed;

# List every installed distribution and its version.
my $instmod = ExtUtils::Installed->new();
foreach my $module ($instmod->modules()) {
    my $version = $instmod->version($module) || "???";
    print "$module -- $version\n";
}


    * pmpath - show the module's full path
    * pmvers - get a module version number
    * pmdesc - get a module description
    * pmall - get all installed modules pmdesc descriptions
    * pmdirs - print the perl module path, newline separated
    * plxload - show what files a given program loads at compile time
    * pmload - show what files a given module loads at compile time
    * pmexp - show a module's exports
    * pminst - find what's installed
    * pmeth - list a class's methods, recursively
    * pmls - long list the module path
    * pmcat - cat the module source through your pager
    * pman - show the module's pod docs
    * pmfunc - show a function source code from a module
    * podgrep - grep in pods of a file
    * pfcat - show pods from perlfunc
    * podtoc - list table of contents of a podpage
    * podpath - show full path of pod file
    * pods - list all standard pods and module pods
    * sitepods - list only pods in site_perl directories
    * basepods - list only normal "man-page" style pods
    * faqpods - list only faq pods
    * modpods - all module pods, including site_perl ones
    * stdpods - list standard pods, not site_perl ones

Monday, December 25, 2006

    mysql4tomysql5 (UTF8)

    from mysql
    http://dev.mysql.com/doc/refman/4.1/en/localization.html

    from thomas


Below is the process I went through when I needed to import data dumped from MySQL 4 into MySQL 5.

First, back up the database:

    mysqldump -u root -p --default-character-set=latin1 Koha >output.sql

    piconv -f utf8 -t big5 output.sql > big5.sql

    piconv -f big5 -t utf8 big5.sql >utf8.sql
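The two piconv passes can equally be done in one Perl script with the core Encode module (a sketch of exactly the commands above; file names as in the post):

```perl
#!/usr/bin/perl
# Rough Perl equivalent of the two piconv passes above.
use strict;
use warnings;
use Encode qw(from_to);

local $/;                                  # slurp the whole file
open my $in, '<', 'output.sql' or die $!;
my $data = <$in>;
from_to($data, 'utf8', 'big5');            # first pass:  utf8 -> big5
from_to($data, 'big5', 'utf8');            # second pass: big5 -> utf8
open my $out, '>', 'utf8.sql' or die $!;
print {$out} $data;
```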



Upgrade MySQL:

    apt-get install mysql-server-5.0 mysql-common mysql-client-5.0
Add the following settings to MySQL's my.cnf configuration file:

    [client]
    default-character-set=utf8
    [mysqld]
    init_connect= 'SET NAMES utf8'
    default-character-set=utf8
    default-collation=utf8_general_ci

Create the Koha database:

    CREATE DATABASE `Koha` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;

Edit the SQL file and add these lines at the very top:

    SET NAMES utf8;
    SET CHARACTER_SET_CLIENT=utf8;
    SET CHARACTER_SET_RESULTS=utf8;

Then change the clause at the end of every table definition from

    TYPE=MyISAM;

to

    ENGINE=MyISAM DEFAULT CHARSET=utf8;

Once everything is changed, you can import it:

# mysql -u USER -p DATABASE < utf8.sql



For reference: http://blog.leolo.cc/2006/02/06/134/



After the steps above, Chinese characters already display correctly in phpMyAdmin!

However, Koha itself has not yet been modified to use UTF-8,

so the init_connect = 'SET NAMES utf8' added to my.cnf will affect other legacy applications.

In addition,

Koha's LEFT JOIN usage seems to be broken on MySQL 5:

LEFT JOIN table1 ON TABLE1.column=TABLE2.column is not recognized, causing this error:

    [Tue Jul 04 19:10:35 2006] [error] [client 127.0.0.1] DBD::mysql::st fetchrow failed: fetch() without execute() at /opt/koha/intranet/modules/C4/SearchMarc.pm line 334., referer: http://127.0.0.1:8080/cgi-bin/koha/members/member.pl

The cause is this query:

    SELECT biblio.biblionumber as bn,biblioitems.*,biblio.*, marc_biblio.bibid,itemtypes.notforloan,itemtypes.description
    FROM biblio, marc_biblio
    LEFT JOIN biblioitems on biblio.biblionumber = biblioitems.biblionumber
    LEFT JOIN itemtypes on itemtypes.itemtype = biblioitems.itemtype
    WHERE biblio.biblionumber = marc_biblio.biblionumber AND bibid = ?



biblio.biblionumber is not recognized, but changing it to 'biblio.biblionumber' makes it work!

But quite a few scripts use LEFT JOIN or similar joins; do they all need to be changed?
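A likely cause, going by the join-precedence change documented for MySQL 5.0.12 and later: the comma operator now binds more loosely than JOIN, so the ON clause of the LEFT JOIN can no longer see the biblio table. Parenthesizing the comma-joined tables restores the old scoping without touching the rest of the statement (a sketch of the rewritten query, not tested here):

```sql
SELECT biblio.biblionumber AS bn, biblioitems.*, biblio.*,
       marc_biblio.bibid, itemtypes.notforloan, itemtypes.description
  FROM (biblio, marc_biblio)                 -- parentheses added
  LEFT JOIN biblioitems ON biblio.biblionumber = biblioitems.biblionumber
  LEFT JOIN itemtypes   ON itemtypes.itemtype  = biblioitems.itemtype
 WHERE biblio.biblionumber = marc_biblio.biblionumber AND bibid = ?
```

If that precedence change is indeed the culprit, the same one-line parenthesization would apply to the other scripts that mix comma joins with LEFT JOIN.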

And there are probably more MySQL 5 / UTF-8 issues still waiting to be discovered and resolved.

PS
Solved, not yet tested. From Kochin Chang:
    Dear Thomas,

    I just installed Chinese Koha on Ubuntu 6.06 which comes with MySQL
    5.0.x. Your earlier message about your experiences with installing Koha
    with MySQL 5.0.x helped a lot. In your message you mentioned

However, Koha itself has not yet been modified to use UTF-8,
so the init_connect = 'SET NAMES utf8' added to my.cnf
will affect other legacy applications.

    To get around this problem, I modified intranet/modules/C4/Context.pm
    so that the procedure sub dbh now becomes

sub dbh
{
    my $self = shift;
    my $sth;

    if (defined($context->{"dbh"})) {
        $sth = $context->{"dbh"}->prepare("select 1");
        return $context->{"dbh"} if (defined($sth->execute));
    }

    # No database handle, or it died. Create one.
    $context->{"dbh"} = &_new_dbh();
    # Make sure the UTF-8 charset is used for the connection and results.
    if (defined($context->{'dbh'})) {
        $sth = $context->{'dbh'}->prepare('SET NAMES utf8');
        $sth->execute;
    }

    return $context->{"dbh"};
}

    The idea is to send the 'SET NAMES utf8' string over the newly created
    connection to MySQL server. With this you don't have to put 'SET NAMES
    utf8' in the my.cnf. I have tested my modification on my Koha
    installation. So far it works like a charm.
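A later alternative, assuming a DBD::mysql newer than the 2.9007 mentioned elsewhere in these notes, is the mysql_enable_utf8 connection attribute, which puts the connection into UTF-8 and marks returned strings as UTF-8 without issuing SET NAMES by hand:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# DSN and credentials are illustrative. mysql_enable_utf8 asks DBD::mysql
# to run the connection in UTF-8 and decode results as character strings.
my $dbh = DBI->connect(
    'DBI:mysql:database=Koha;host=localhost',
    'kohaadmin', 'password',
    { RaiseError => 1, mysql_enable_utf8 => 1 },
);
```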

    Regards,
    Kochin Chang

Sunday, December 24, 2006

    koha database

    items
    holdingbranch
    homebranch

    255

    itemtype
    itemtypes

    255

Friday, December 22, 2006

    ubuntu 6.06.1

    apt
    apt-get install mysql-client-4.1 mysql-server-4.1 php4 libapache2-mod-auth-mysql php4-mysql phpmyadmin libxml2-dev libssl-dev libyaz libyaz-dev yaz yaz-doc libwrap-dev libdate-manip-perl libhtml-template-perl libmail-sendmail-perl make gcc lynx wget ncftp unzip libmysqlclient14
    libmysqlclient14-dev

    cpan
    cpan -i Bundle::KohaSupport Unicode::String Encode::HanExtra XML::SAX Class::Accessor Business::ISBN Net::LDAP PDF::API2 MARC::Record MARC::File::XML PDF::API2 Compress::Zlib PDF::Reuse PDF::Report PDF::Create PDF::Labels Acme::Comment GD::Barcode Data::Random

    HTML::Template::Pro XML::Parser XML::SAX::Expat XML::LibXML XML::Simple PDF::Reuse::Barcode MARC::Crosswalk::DublinCore LWP::Simple Date::Calc GD::Barcode

    apt-get install libxslt1-dev libgcrypt11-dev libgpg-error-dev
    http://ftp.indexdata.dk/pub/yaz/ubuntu/DrapperDrake/
    Net::Z3950::ZOOM

    http://www.nabble.com/Koha-f14380.html

    speedy
    cpan -i CGI::SpeedyCGI
    apt-get install speedy-cgi-perl

Wednesday, December 20, 2006

    speed koha

    cpan -i CGI::SpeedyCGI
    apt-get install speedy-cgi-perl
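For reference, SpeedyCGI keeps a Perl interpreter resident between requests, so an existing CGI script is converted just by pointing its shebang at speedy (path assumed here to be /usr/bin/speedy; the script body is otherwise unchanged):

```perl
#!/usr/bin/speedy
# Identical to a normal CGI script; speedy compiles it once and reuses
# the interpreter across requests instead of forking perl for every hit.
use strict;
use warnings;

print "Content-type: text/plain\r\n\r\n";
print "Served by a persistent perl interpreter\n";
```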

Wednesday, December 13, 2006

    mysql:DBD ubuntu

    libmysqlclient14
    libmysqlclient14-dev

Sunday, December 10, 2006

    test note

    modify koha 2.2.7 & koha rel_2_2
    add File::Find (perl 5.8.8)
    file.pl

#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

find(\&wanted, '/usr/local/koha/opac/cgi-bin');

sub wanted {
    if (/\.pl$/) {
        print $File::Find::name . "\n";
        system("/bin/sed -f /usr/local/koha/opac/cgi-bin/insert.text $File::Find::name > $File::Find::name.t");
        # Remove the original; sed wrote the patched copy to *.t.
        system("/bin/rm -f $File::Find::name");
        # system(q(rename 's/\.t$//' *.t));
    }
}

    insert.text

/#!\/usr\/bin\/perl/c\
#!/usr/bin/perl\
use lib '/usr/local/koha/intranet/modules';


    chmod -R 0755 tmp
    chown -R apache owner

Thursday, December 07, 2006

    koha 2.2.7

    koha 2.2.7
# comment out the original line, then add these two lines
    SetEnvIf Request_URI "\.pls" PERL5LIB "/usr/local/koha/intranet/modules"
    SetEnvIf Request_URI "\.pls" KOHA_CONF "/etc/koha.conf"
    Opac
    #/cgi-bin/opac-main.pl add
    use lib '/usr/local/koha/intranet/modules';

Thursday, September 28, 2006

    koha cvs

Get the current Koha release code:
    cvs -z3 -d:pserver:anonymous@cvs.savannah.nongnu.org:/sources/koha co koha


Get the latest version (export a release by tag):
    cvs -z3 -d:pserver:anonymous@cvs.savannah.nongnu.org:/sources/koha export -r dev_week koha

    koha cvs
    rel_2_2
    rel_3_0
    HEAD(latest)

Wednesday, September 27, 2006

Koha resources

    wiki
    1.http://wiki.liblime.com/doku.php?do=index&id=koha24rmnotes

    2.https://www.athenscounty.lib.oh.us/wiki/doku.php?do=index&id=catalogingproject

    3.http://wiki.koha.org/doku.php?do=index&id=devweek

Serials User Guide
    http://www.katipo.co.nz/solutions/koha/docs/CorpSerialsModuleUserGuide.pdf

    koha blog koha::ajax
    http://blog.katipo.co.nz/?p=6#more-6

    Nelsonville Public Library
    http://zoomopac.liblime.com/cgi-bin/koha/opac-main.pl

    Gussying up OpenSearch
    http://dilettantes.blogspot.com/2005/06/gussying-up-opensearch.html

    ZAP! Advanced Search
    http://liblime.com/zap/advanced.html

    A9 OpenSearch
    http://opensearch.a9.com/

    WWW::OpenSearch - Search A9 OpenSearch compatible engines
    http://search.cpan.org/src/BRICAS/WWW-OpenSearch-0.08/README

    Unimarc, marc21, Unicode, and MARC::File::XML
    http://www.mail-archive.com/perl4lib@perl.org/msg00950.html

    Open Source Software
    http://www.libsuccess.org/index.php?title=Open_Source_Software

    Koha Analytical Feature List and
    Documentation Index with
    Demonstration and Example Implementation Links
    http://www.agogme.com/koha_org/features/

    open cataloger
    http://team42.liblime.com/
    http://teamipg.liblime.com/nls.html

    barcode

    cp 145 barcodedata

    /usr/local/share/perl/5.8.7/PDF/API2/Basic/TTF
    PDF/API2/Resource/XObject/Form

    dev co Serials.pm Letters.pm

    Subroutine PDF::API2::Resource::Font::Postscript::O_RDONLY redefined at /usr/share/perl/5.8/Exporter.pm line 65.
    at /usr/local/share/perl/5.8.7/PDF/API2/Resource/Font/Postscript.pm line 46


    old barcode
modify countryCodes.dat => English
158 = Taiwan

    create_labels_conf_table.sql
    C4/
    PDF::API2
    Compress::Zlib
    PDF::Reuse
    PDF::Report
    PDF::Create
    PDF::Labels
    Acme::Comment
    GD::Barcode
    Data::Random
    admin-home.tmpl
    label-topmenu.inc
    koha/ owner www-data

    error

    1.barcodesGenerator.pl

    DBD::mysql::st execute failed: You have an error in your SQL syntax. Check the manual that corresponds to your

    MySQL server version for the right syntax to use near 'AND I.barcode <= ) AND (I.barcode <> 'FALTA') ORDER

    BY Codigo' at ./barcodesGenerator.pl line 215.
    DBD::mysql::st fetchrow_array failed: fetch() without execute() at ./barcodesGenerator.pl line 233.
    Can't call method "is_obj" on an undefined value at /usr/local/share/perl/5.8.7/PDF/API2/Basic/PDF/File.pm line 986.

    2.test.textblock.pl
    Can't locate object method "text_block" via package "PDF::API2::Basic::TTF::Table" at ./test.textblock.pl line 17.

Tuesday, September 26, 2006

    perl use lib

    use lib '/usr/local/koha/intranet/modules';

Monday, September 11, 2006

Koha theme scripts

Intranet theme

1. Fix cgi-bin/*.pl:

$tabsysprefs{intranetstylesheet}="Intranet"
$tabsysprefs{intranetcolorstylesheet}="Intranet"

2. Fix the system preferences script /cgi-bin/admin/systempreferences.pl:
    $template->param(
    intranetcolorstylesheet => C4::Context->preference("intranetcolorstylesheet"),
    intranetstylesheet => C4::Context->preference("intranetstylesheet"),
    );

Pass the parameters back.

3. Fix /intranet/htdocs/intranet-tmpl/npl/zh_TW/includes/doc-head-close*.inc
and the related references to intranetcolorstylesheet and intranetstylesheet


OPAC theme

1. Fix cgi-bin/*.pl:

    $tabsysprefs{opacstylesheet}="OPAC"
    $tabsysprefs{opaccolorstylesheet}="OPAC"
    $tabsysprefs{opaclayoutstylesheet}="OPAC"

2. Fix the system preference handling in /cgi-bin/*.pl:
    $template->param(
    opacstylesheet => C4::Context->preference("opacstylesheet"),
    opaccolorstylesheet => C4::Context->preference("opaccolorstylesheet"),
    opaclayoutstylesheet => C4::Context->preference("opaclayoutstylesheet"),
    );

Pass the parameters back.

3. Fix /opac/htdocs/opac-tmpl/npl/zh_TW/includes/doc-head-close*.inc
and the related references to opacstylesheet, opaccolorstylesheet and opaclayoutstylesheet


Friday, August 18, 2006

    MARC::SAX

    cpan -i XML::LibXML LWP::Simple XML::Simple

    http://www.nntp.perl.org/group/perl.perl4lib/2369

Just providing an update on this issue. As you may
recall, I've been putting the MARC::Record suite,
specifically MARC::File::XML and MARC::Charset, through
some fairly rigorous tests, including a 'roundtrip'
test, which converts the binary MARC-8 records to
MARCXML / UTF-8 and then back to binary MARC but
encoded as UTF-8. This test is available here:

    http://liblime.com/public/roundtrip.pl


I discovered a number of bugs or issues, not in the
MARC::* stuff, but in the back-end SAX parsers. I'll
just summarize my discoveries here for posterity:

1. MARC::File::XML, if it encounters unmapped
encoding in a MARC-8 encoded binary MARC file (in
as_xml()), will drop the entire subfield where the
improper encoding exists. The simple solution is to
always use: MARC::Charset->ignore_errors(1); if you
expect your data will have improper encoding.

2. the XML::SAX::PurePerl parser cannot properly
handle combining characters. I've reported this bug
here:

    http://rt.cpan.org/Public/Bug/Display.html?id=19543


At the suggestion of several, I tried replacing my
default system parser with expat, which caused another
problem:

    3. handing valid UTF-8 encoded XML to new_from_xml()
    sometimes causes the entire record to be destroyed
    when using XML::SAX::Expat as the parser (with
    PurePerl these seem to work). It fails with the error:

    not well-formed (invalid token) at line 23, column 43,
    byte 937 at /usr/lib/perl5/XML/Parser.pm line 187

I haven't been able to track down the cause of this
bug. I eventually found a workaround that didn't result
in the above error but instead silently mangled the
resulting binary MARC record on the way out:

4. Using incompatible versions of XML::SAX::LibXML and
    libxml2 will cause binary MARC records to be mangled
    when passed through new_from_xml() in some cases. The
    solution here is to make sure you're running
    compatible versions of XML::SAX::LibXML and libxml2. I
    run Debian Sarge and when I just used the package
    maintainer's versions it fixed the bug. It's unclear
    to me why the binary MARC would be mangled, this may
    indicate a problem with MARC::* but I haven't
    had time to track it down and since installing
    compatible versions of the parser back-end solves the
    problem I can only assume it's the fault of the
    incompatible parsers.

Issues #3 and #4 above can be replicated by running the
following batch of records through the roundtrip.pl script
    above:

    http://liblime.com/public/several.mrc

    If you want to test #2, try running this record
    through roundtrip.pl:

    http://liblime.com/public/combiningchar.mrc

    BTW: you can change your default SAX parser by editing
    the .ini file ... mine is located in
    /usr/local/share/perl/5.8.4/XML/SAX/ParserDetails.ini
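Instead of editing ParserDetails.ini, the parser can also be forced per script; XML::SAX::ParserFactory honours the $XML::SAX::ParserPackage variable (a sketch; the parser named here is only one possibility):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use XML::SAX;

# Force a specific SAX parser for everything built through
# XML::SAX::ParserFactory (which is what MARC::File::XML uses).
$XML::SAX::ParserPackage = 'XML::LibXML::SAX';

# List the parsers XML::SAX currently knows about (ParserDetails.ini).
print "$_->{Name}\n" for @{ XML::SAX->parsers };
```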

    So the bottom line is, if you want to use
    MARC::File::XML in any serious application, you've got
    to use compatible versions of the
    libxml2 parser and XML::SAX::LibXML. Check the README
    in the perl package for documentation on which are
    compatible...

    Maybe a note somewhere in the MARC::File::XML
    documentation to point these issues out would be
    useful. Also, it wouldn't be too bad to have
    a few tests to make sure that the system's default SAX
    parser is capable of handling these cases. Just my two
    cents.

Saturday, August 05, 2006

    koha 2.2.6RC2

    ... available for download at :
    http://download.savannah.nongnu.org/releases/koha/

    WARNING : this is a Release Candidate, so, if you decide to install it,
    you must know that :
    - some bugs may remain
- you MUST back up your database before updating
- you must know how to roll back to the previous version if things go
wrong.

Otherwise, don't install it and wait for the official 2.2.6!

    Release notes :
    ***************************************
    RELEASE NOTES
    =============

    Koha is the first Open-Source Integrated Library System.
    Released in New Zealand, in January 2000, it is maintained by a team of
    volunteers from around the globe. The Koha system is a full catalogue,
    opac, circulation and acquisitions system.

Koha 2.2.6 is more than 247,000 lines of code, developed by more than
30 different developers (and even more translators).

With the 2.2.6 version, Koha is now a mature product with a lot of
nice features. It's used in more than 100 libraries of all kinds
(public, schools, religious...) and of different sizes (from 1 to 8
branches, from 1,000 to 300,000 items).


    ========
    WARNINGS
    ========
* The librarian interface is tested only with Mozilla/Firefox. It should
work (partially) with IE. The OPAC should work fine with any browser.
* In this release, French & English are up to date. Other language po
files are provided in the package. If you update/complete one of them,
please send it to paul.poulain @ free.fr and it will be included in a
2.2.6b release!
    * NEW PACKAGES REQUIRED :
    - Class::Accessor (0.25)
    - XML::SAX (0.14)
    - MARC::File::XML (0.83),
    - MARC::Charset (0.95),
    - LWP::Simple
    - XML::Simple


    KNOWN BUGS
    **********
    Critical/blocking : none
    see bugs.koha.org for more information. Please report any problem in
    bugs.koha.org

Apple users : it seems there is an annoying bug in Firefox that you can
hit in the librarian biblio search: only the first MARC field list is
available and the others can't be opened. The problem is described and
fixed at http://www.macfixit.com/article.php?story=20051226091140372

    =======
    CHANGES
    =======
    Changes 2.2.5 => 2.2.6
    **********************

    DB CHANGES
    **********

    DB structure : dewey field has been moved to a char(30)
    DB content : many many new systempreferences
    Koha opac stylesheets : none

    BUGFIXES
    === CATALOGUING ===
    * result page display was incorrect when searching a biblio before
    adding a new one.
    * no more saving of an empty field in the DB
    * Fix for splitting up fixed fields containing | in them causing
    incorrect storage of fixed fields in Koha DB.
    * An item that is on loan can't be deleted anymore.

    === CIRCULATION ===
    * minor bugfix : Adding 0 before days and months in order to make date
    comparison work.

    === PARAMETERS ===
    * adding page facility in book funds.


    IMPROVEMENTS
    Almost nothing has changed in the DB, so the stability should be good.
    But you are warned that some/few bugs may occur.
    Please READ the release notes. Everything is compatible with previous
    versions, so, if you just install Koha, you should have the same
    behaviour as previously, but you will miss many interesting features!

    === ACQUISITION ===
* added a (crontab) script, check_suggestion.pl, that sends a mail to
the librarian when a suggestion is pending
    * *MAJOR IMPROVEMENT* :
    - online help has been added
    - the order.pl page doesn't show baskets closed for more than 6 months
    anymore.
- when receiving an order, you now see the list of existing parcels. The
parcel code used to be the "Supplier invoice information". Now when you
go to the receipt page, you see all previous receipts, and can create a
new parcel, exactly as before. This does not require any change in Koha
internals, but we hope it's clearer than before.
- when receiving an order in a parcel, you now see all orders still
waiting for a receipt. Thus, you can enter isbn/title as previously, or
directly click on a title in the list.
    We worked hard on those new acquisition screens, and hope that you'll
    find it easier to use than before.

    === AUTHORITIES ===
    * if the library doesn't define a summary for an authority, a
    standard/default one will be built automatically.
* if the librarian sets * as the subfield, all subfields will be shown,
without any separator, but respecting the order. Previously, you had to
define every subfield, causing problems in case of reordered subfields.

    === CIRCULATION===
* added a (crontab) script : overduenoticesSelect.pl, that can be used
as a replacement for the overduenotice script. This one has many more
    parameters, including a filter on branches and an external letter (to
    avoid the need to modify the script on each release)
    * merged today & previous issues list during issuing. Added a button to
    renew issues & return issues in 1 click

    === CATALOGUE ===
    * highly improved MARC21 default framework
    * new systempref "sortbynonfiling". If you catalogue with complete MARC
    (including indicators), you can set this pref to Yes. It will remove
    non
    filing chars and order "The King Henry IV" to letter "K". This
    systempref is USELESS for UNIMARC (as UNIMARC doesn't manage non-filing
    chars in an indicator, unlike MARC21)

    === CATALOGUING ===
    * added a (crontab) script : delete_authority.pl, that deletes all
    entries of an authority in a biblio.
    * *MAJOR IMPROVEMENT* in MARC editing :
    - duplicating a field (+ on the tag) does not require a server call
    anymore : it's much faster.
    - duplicating a subfield is no longer done with a pipe (|), but by
    clicking on the + facing the subfield
    - the "hidden" property of the framework now has 19 possible values.
    Look in the online help to see them. The immediate interesting value is
    -1, which means the subfield is minimized on the MARC editor and you
    must click on it to expand it. Rarely used fields/subfields are easier
    to manage.
- an advancedMARCeditor systempref has been added. If set, tag/subfield
values are not shown, except when putting the mouse on the
tag/subfield.
This systempref is only for "MARC addicts"!
    * MARC21 : many plugins have been added (by joshua). It seems you now
    have as many plugins for fixed length fields in MARC21 as in UNIMARC!
Note that plugins help you fill in a tag/subfield in the MARC editor.

    === MEMBERS ===
    * patron images management. A new systempreference has been added
    (patronimages, in borrowers section). If set (to an image extension,
    like jpg or gif), the image will be shown on patron detail or
    circulation page. The images must be stored in
KOHADIR/htdocs/intranet-tmpl/patronimages/ . The image must be named
<borrowernumber>.<extension> (1.jpg for example, for
borrower #1)

    === OPAC ===
Many preferences have been added to OPAC. Read them carefully, as you
will be able to greatly modify the look and behaviour of your OPAC!
    * new default stylesheet for css (purple on white). A new stylesheet
    (contributed by liblime.com) has been added to show everybody the
    changes in OPAC. If you want to stay with the old one (dark/light
    green), just reach Koha >> Parameters >> Systempreferences >> OPAC >>
    opacstylesheet > modify > /opac-tmpl/default/opac_old.css
    * Amazon content : if the new AmazonContent systempref is set AND an
AmazonDevKey/AmazonDevTag (available on www.amazon.com) is entered in
    systempreferences AND XML::Simple package is installed, OPAC will show
    amazon cover pages when available. Note that this feature may not be
    legal everywhere. For example, it seems it may not be legal in France
    (you must ask each editor for permission)
    * new systempref : opaccredit. If something (including html) is stored
    there, the footer of OPAC pages will contain what is there (instead of
    the "Library powered by Koha..." that is on css default templates)
* new systempref : opacnav. In this preference, you can put any html
that will be added to your menu. For example, to add a button to reach
the koha website : <a href="..." title="Koha world website">Koha</a>
    * new systempref : opacbookbag : if set to 0, the basket/book bag
    features will be hidden in OPAC (even for logged in users).
    * authorities heading search : a new menu has been added so the user
    can
    search in authorities (same behaviour as in intranet)
* new systempref : opaclanguagesdisplay. If set to "yes" (default
value), the list of available languages is shown. Otherwise, there is no
list, and koha will stay in the default language only.
    * new systempref : opacreadinghistory. If set to "yes" (default value),
    the reading history will be available on OPAC (for logged in users). If
    set to "no", the reading history never appears.
    * Recent Acquisition : added a new page to filter recent acquisitions
    on
    a given branch.
    * if you use npl templates, you can have an alternate stylesheet (to
    set
    it, go to librarian interface >> Parameters > Systempreferences > OPAC
    >
    opacstylesheet > modify > /opac-tmpl/npl/en/includes/opac.liblime.css)
    * when there is more than 1 item at a given location
    (branch/callnumber/location), the list shows the number of items, not 1
    line for each (same) value.

    === REPORTS ===
* Inventory : you can now scan every book in your library into a text
file on an unconnected laptop. Then just upload the file into Koha; it
will "mark as seen" all the books. Once you've scanned everything in
your library, just query for all books not seen since the beginning of
the inventory, and you'll get the "lost" books.

    === SERIALS ===
    * MAJOR IMPROVEMENT : Adding item creation on the fly when receiving
    serial : if you activate systempref > Catalogue > serialsadditems,
    you'll have a form to create an item on the fly at the end of the
    serial
    receipt form. The item contains the callnumber, the branch, the barcode
    and the location (where applicable). Go to the catalogue after setting
    a
serial to "arrived", and you should see the item created. If you leave
serialsadditems OFF, behaviour is unchanged from previous releases.
    * the last 5 arrived serials are shown when receiving serials.

Friday, July 21, 2006

Sunday, July 16, 2006

    Investigations on Perl, MySQL & UTF-8

    http://lists.gnu.org/archive/html/koha-devel/2006-03/msg00027.html

    Because the story of Perl, MySQL, UTF-8 and Koha is becoming more and
    more complicated, I've decided to start my tests outside of Koha or any
    web server. I wanted to check that Perl and MySQL could communicate
    with UTF-8 data.

    What I did :

1. copy some UTF-8 strings from
http://www.columbia.edu/kermit/utf8-t1.html and paste them into a UTF-8
text file utf8.txt (open/paste in a UTF-8 console, with Vim having :set
encoding=utf-8)

    2. create a UTF-8 database with a simple table having a TEXT field

    $ mysql --user=root --password=xxx
    mysql> CREATE DATABASE `utf8_test` CHARACTER SET utf8;
    mysql> connect utf8_test
    mysql> create table strings (id int, value text);
    mysql> quit

    (no need to set connection character set to utf-8 in that case, default
    latin1 is fine)

    Note: my MySQL server is latin1...

    $ mysql --user=root --password=xxx utf8_test
    mysql> status
    Server characterset: latin1
    Db characterset: utf8
    Client characterset: latin1
    Conn. characterset: latin1
    mysql> set names 'UTF8';
    mysql> status
    Server characterset: latin1
    Db characterset: utf8
    Client characterset: utf8
    Conn. characterset: utf8

3. write and execute a Perl script which reads the UTF-8 text file,
inserts the UTF-8 strings into the database, retrieves them from the
database, and prints them to STDOUT. See details in the attached file
readfile_insertdb.pl. Important note: "set names 'UTF8';" is mandatory.
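The attached readfile_insertdb.pl is not reproduced here; a sketch of the steps it describes, using the table and credentials from step 2 (the file name is as in step 1, everything else illustrative):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Connect, then switch the connection to UTF-8 (mandatory, per the post).
my $dbh = DBI->connect('DBI:mysql:database=utf8_test', 'root', 'xxx',
                       { RaiseError => 1 });
$dbh->do("set names 'UTF8'");

# Read the UTF-8 strings and insert them.
open my $fh, '<:utf8', 'utf8.txt' or die $!;
my $ins = $dbh->prepare('INSERT INTO strings (id, value) VALUES (?, ?)');
my $id  = 0;
while (my $line = <$fh>) {
    chomp $line;
    $ins->execute(++$id, $line);
}

# Read them back and print to STDOUT as UTF-8.
binmode STDOUT, ':utf8';
my $sel = $dbh->prepare('SELECT id, value FROM strings ORDER BY id');
$sel->execute;
while (my ($n, $value) = $sel->fetchrow_array) {
    print "$n: $value\n";
}
```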

    Everything is *working fine*. My output is in UTF-8, I'm 100% sure of
    it.

    DBD::mysql : 2.9007
    Perl : 5.8.7
    MySQL : 4.1.12-Debian_1ubuntu3.1-log
    DBI : 1.48

    (find your local versions with attached script versions.pl)

I suspect that the data Paul stored in MySQL is not truly UTF-8. Maybe
I'm missing the point, but it seems Perl, MySQL and UTF-8 don't work so
badly together after all.

The Inter-Library System (ILS) Comparison Chart

    http://wiki.koha.org/doku.php?id=inter_library_system_comparison

Saturday, July 15, 2006

    koha-2.3.0 bug-1

    ERROR 1062 at line 1: Duplicate entry 'localhost-root' for key 1
    256ERROR 1062 at line 1: Duplicate entry '%-Koha-root' for key 1


read_config_file(/etc/koha.conf.tmp) returned undef at /usr/local/koha/intranet/modules/C4/Context.pm line 195.
Can't call method "config" on unblessed reference at /usr/local/koha/intranet/modules/C4/Context.pm line 488.
    Problem updating database...

    converts the binary MARC-8 records to MARCXML / UTF-8

    http://www.nntp.perl.org/group/perl.perl4lib/2369

    Hi everyone,

    Just providing an update on this issue. As you may recall, I've
    been putting the MARC::Record suite, specifically MARC::File::XML
and MARC::Charset, through some fairly rigorous tests, including
    a 'roundtrip' test, which converts the binary MARC-8 records to
    MARCXML / UTF-8 and then back to binary MARC but encoded as UTF-8.
    This test is available here:

    http://liblime.com/public/roundtrip.pl

    I discovered a number of bugs or issues, not in the MARC::* stuff, but in the
    back-end SAX parsers. I'll just summarize my discoveries here for
    posterity:

1. MARC::File::XML, if it encounters unmapped encoding in a
    MARC-8 encoded binary MARC file (in as_xml()) will drop the entire
    subfield where the improper encoding exists. The simple solution is
    to always use: MARC::Charset->ignore_errors(1); if you expect your
    data will have improper encoding.

    2. the XML::SAX::PurePerl parser cannot properly handle combining
    characters. I've reported this bug here:

    http://rt.cpan.org/Public/Bug/Display.html?id=19543

    At the suggestion of several, I tried replacing my default system
    parser with expat, which caused another problem:

    3. handing valid UTF-8 encoded XML to new_from_xml() sometimes
    causes the entire record to be destroyed when using XML::SAX::Expat
    as the parser (with PurePerl the same records seem to work). It fails
    with the error:

    not well-formed (invalid token) at line 23, column 43, byte 937 at /usr/lib/perl5/XML/Parser.pm line 187

    I haven't been able to track down the cause of this bug. I eventually
    found a workaround that didn't produce the above error but instead
    silently mangled the resulting binary MARC record on the way out:

    4. Using incompatible versions of XML::SAX::LibXML and libxml2 will
    cause binary MARC records to be mangled when passed through new_from_xml()
    in some cases. The solution here is to make sure you're running
    compatible versions of XML::SAX::LibXML and libxml2. I run Debian
    Sarge, and when I switched to the package maintainer's versions it
    fixed the bug. It's unclear to me why the binary MARC would be
    mangled; this may indicate a problem with MARC::*, but I haven't
    had time to track it down, and since installing compatible versions
    of the parser back-end solves the problem I can only assume the
    incompatible parsers are at fault.

    Issues #3 and #4 above can be replicated by running the following batch
    of records through the roundtrip.pl script above:

    http://liblime.com/public/several.mrc

    If you want to test #2, try running this record through roundtrip.pl:

    http://liblime.com/public/combiningchar.mrc

    BTW: you can change your default SAX parser by editing the .ini file ...
    mine is located in /usr/local/share/perl/5.8.4/XML/SAX/ParserDetails.ini

    So the bottom line is, if you want to use MARC::File::XML in any
    serious application, you've got to use compatible versions of the
    libxml2 parser and XML::SAX::LibXML. Check the README in the perl
    package for documentation on which are compatible...

    Maybe a note somewhere in the MARC::File::XML documentation to point
    these issues out would be useful. Also, it wouldn't be too bad to have
    a few tests to make sure that the system's default SAX parser is capable
    of handling these cases. Just my two cents.

    Cheers,

    --
    Joshua Ferraro VENDOR SERVICES FOR OPEN-SOURCE SOFTWARE
    President, Technology migration, training, maintenance, support
    LibLime Featuring Koha Open-Source ILS
    jmf[at]liblime.com |Full Demos at http://liblime.com/koha |1(888)KohaILS

    DB schema

    1. A logical schema diagram for 3.0 has been written by Paul. It's a 2-page document, available in several formats: openoffice.org [http://www.koha-fr.org/presentation/MCD_version3.odg 15KB], PDF [http://www.koha-fr.org/presentation/MCD_version3.pdf 230KB]. It will be updated when needed (Paul will take care of the update; if he doesn't, bug him).

    2. A logical schema diagram for 2.2.0 has been written. It's a 2-page document, available in several formats: openoffice.org [http://www.koha-fr.org/presentation/MCD_version2_2_0.sxd 15KB], PDF [http://www.koha-fr.org/presentation/MCD_version2_2_0.pdf 230KB]

    3. A logical schema diagram for 2.0.0 has been written. It's a 2-page document, available in several formats: openoffice.org [http://www.koha-fr.org/presentation/MCD2.sxd 15KB], PDF [http://www.koha-fr.org/presentation/MCD.pdf 230KB] and jpg [http://www.koha-fr.org/presentation/MCD1.jpg page 1, 160KB] and [http://www.koha-fr.org/presentation/MCD2.jpg page 2 170KB]. Some draft 'logical' schema diagrams from 1.3.3 are [http://irref.mine.nu/user/dchud/koha-schema/ available here]

    ZebraSearchingDefinitions: an explanation of the components of searching with the new ZOOM API, and a discussion of which cataloging procedures should

    The Koha Online Catalog: A Working Definition

    In versions of Koha prior to 2.4, the goal with Koha’s MARC support was to get a functioning ILS in place that was capable of storing MARC records correctly. But now we have a more ambitious goal: we want our ILS to be capable of searching the semantic information in MARC records to the fullest extent possible. A secondary goal is to provide easy access from the Online Catalog to resources that extend beyond just the bibliographic records for library holdings.

    This Wiki page provides a workspace where Koha developers, cataloging staff, and general staff can post ideas, requests, and questions for how Koha handles searching (and display) of bibliographic records and access to other resources.
    Scope

    There are many considerations in constructing a working definition of the Koha Catalog. Ultimately, our working definition will consist of individual goals. An example of a goal might be: "I want to be able to search for an exact title like 'It' by Stephen King, and have it be the first record in the result set." To realize a given goal, we must define a set of practices in four areas:

    Search Indexes

    The indexes are where we define:

    * how MARC fields should be grouped together as 'search points' (eg, 'author', 'date', 'exact title' are search points)
    * what kinds of searches we can do on those groupings (eg, 'number' search, 'phrase' search)
    * how to search within certain fields for data (specific positions of fixed fields, for instance)

    MARC Frameworks

    Koha’s MARC Frameworks are where we define:

    * what constitutes a MARC record (what fields/subfields)
    * labels for each field
    * how the fields are handled within the MARC editor
    * how the fields should be displayed in search results and details pages
    * a mapping between MARC records and Koha's item management (issues, reserves, circ rules, barcodes, etc.)

    Cataloging

    Consistent cataloging practices are, together with Frameworks and Indexes, an essential component of searching. Here are some things to think about:

    * NPL employs 'copy-cataloging', not original cataloging, so records often come from different sources that may have different cataloging practices.
    * in areas where no official rule has been made in AACR2 or similar cataloging manuals, Koha will need a consistent practice in order to properly index records
    * with over 2000 edit points per record, we need to identify clearly which of those are most important for purposes of search and display

    Interface Design

    The Koha OPAC is an interface through which patrons and staff construct queries of the data. The interface needs to be fast, accurate, and intuitive to use if it is to be a useful search tool of the library’s collections.

    Our task then, is to construct a working set of expectations and definitions of the above. The definitions can then be applied directly to each of the four categories to realize a given search goal.
    Discussion Points
    Dates

    MARC records don't have a consistent way to distinguish between copyright and publication dates (as far as I can tell), so we have two date types to think about: copyright/publication, and acquisition. Here are some related MARC fields for each:
    copyright/publication dates

    008 / 07-10 : generally a primary date associated with the
    publication, distribution, etc. of an item and the beginning
    date of a collection

    008 / 11-14 : secondary date associated with the publication
    distribution, etc. of an item and the ending date of a collection.
    For books and visual materials, this may be a detailed date which
    represents a month and day.

    260

    362
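    Reading these positions out of a raw 008 value is a pair of substr calls; the 40-character 008 string below is a fabricated example:

    ```perl
    #!/usr/bin/perl
    # Pull the two date ranges out of a raw 008 fixed field with substr.
    use strict;
    use warnings;

    my $f008 = '060101t20062005nyu    j          1 eng d';

    my $date1 = substr($f008, 7, 4);   # 008/07-10: primary (publication) date
    my $date2 = substr($f008, 11, 4);  # 008/11-14: secondary (e.g. copyright) date

    print "publication date: $date1\n";
    print "secondary date: $date2\n";
    ```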

    * Index: I propose to index the 008/07-10 field and make that the date field used for date searches
    * MARC Framework: The framework should require that 008/07-10 be filled with values
    * Cataloging: We need to make sure that all our records have values in the 008/07-10
    * Interface Design: What ways do we want to be able to search on dates? In a range? Individually?

    acquisition date

    942$k : stored as yyyymmddhhmmss
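    A 942$k value can be split into its components with unpack on the yyyymmddhhmmss layout; the timestamp below is a made-up example:

    ```perl
    #!/usr/bin/perl
    # Break a 942$k acquisition timestamp (yyyymmddhhmmss) into parts.
    use strict;
    use warnings;

    my $k942 = '20061227143005';   # example value
    my ( $y, $m, $d, $H, $M, $S ) = unpack 'A4 A2 A2 A2 A2 A2', $k942;

    print "acquired: $y-$m-$d $H:$M:$S\n";
    ```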

    Item Types, Circulation Rules, etc.

    For the Zebra version of Koha, we’re breaking up the itemtypes into four categories:

    1. collection code (the original itemtype)
    2. audience
    3. content
    4. format

    To do this, we are using a combination of several fields in the record to derive each category.

    Leader

    LDR/06 type of record

    FORMAT OF ITEM

    MARC Field: 007/1,2 (form of item)

    ta = everything else = 'regular print'
    tb = LP,LPNF,LP J, LP YA,LP JNF,LP YANF = 'large print'
    sd = CDM,AB,JAB,JABN,YAB,YABN,ABN, = 'sound disk'
    co = CDR = 'CD-ROM'
    vf = AV,AVJ,AVNF,AVJNF = 'VHS'
    vd = DVD,DVDN,DVDJ,DVJN = 'DVD'
    ss = JAC,YAC,AC,JACN,YACN,ACN = 'sound cassette'

    TARGET AUDIENCE

    MARC Field: 008/22 (target audience)
    a = EASY
    b = EASY
    c = J,JNF,JAB,JABN,AVJ,AVJNF,JAC,JACN (juvenile)
    d = YA,YANF,YAB,YABN,YAC,YACN (young adult)
    e = everything else (adult)
    j = J,JNF,JAB,JABN,AVJ,AVJNF,JAC,JACN,DVDJ,DVDJN (juvenile)
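    The format and audience mappings above boil down to simple lookup tables; a sketch covering only the codes listed above, with the stated fallbacks of 'regular print' and adult:

    ```perl
    #!/usr/bin/perl
    # Derive display labels for format (007 codes) and target audience
    # (008/22 codes) from the mapping tables above. Codes not listed
    # fall back to the "everything else" defaults.
    use strict;
    use warnings;

    my %format = (
        tb => 'large print',
        sd => 'sound disk',
        co => 'CD-ROM',
        vf => 'VHS',
        vd => 'DVD',
        ss => 'sound cassette',
    );

    my %audience = (
        a => 'easy', b => 'easy',
        c => 'juvenile', j => 'juvenile',
        d => 'young adult',
    );

    sub format_label {
        my ($code) = @_;
        return exists $format{$code} ? $format{$code} : 'regular print';
    }

    sub audience_label {
        my ($code) = @_;
        return exists $audience{$code} ? $audience{$code} : 'adult';
    }

    print format_label('vd'),  "\n";   # DVD
    print format_label('ta'),  "\n";   # regular print
    print audience_label('d'), "\n";   # young adult
    print audience_label('e'), "\n";   # adult
    ```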

    CONTENT

    MARC Field: 008/33,34

    normal records:
    008 / 33 fiction/non-fiction
    008 / 34 biography
    (what about mystery ... are there any others?)

    video recordings: MARC Field 008/33
    v = videorecording

    008 / 34 l live action
    008 / 34 a animation
    008 / 34 c animation and live action

    sound recordings:
    008 / 30-31 a autobiography
    b biography
    d drama
    etc.
    AUDIO BOOKS
    LDR nim a 00
    008/ 30, 31
    Guidelines for applying content designators:

    Code: Description:
    # Item is a music sound recording When # is used, it is followed by
    another blank (##).
    a Autobiography
    b Biography
    c Conference proceedings
    d Drama
    e Essays
    f Fiction Fiction includes novels, short stories, etc.
    g Reporting Reports of news-worthy events and informative messages
    are included in this category.
    h History History includes historical narration, etc., that may also
    be covered by one of the other codes (e.g., historical poetry).
    i Instruction Instructional text includes instructions on how to
    accomplish a task, learn an art, etc. (e.g., how to replace a light
    switch). Note: Language instruction text is assigned code j.
    j Language instruction Language instructional text may include
    passages that fall under the definition for one of the other codes
    (e.g., language text that includes poetry).
    k Comedy Spoken comedy.
    l Lectures, speeches Literary text is lectures and/or speeches.
    m Memoirs Memoirs are usually autobiographical.
    n Not applicable Item is not a sound recording (e.g., printed or
    manuscript music).
    o Folktales
    p Poetry
    r Rehearsals Rehearsals are performances of any of a variety of
    nonmusical productions.
    s Sounds Sounds include nonmusical utterances and vocalizations that
    may or may not convey meaning.
    t Interviews
    z Other Type of literary text for which none of the other defined
    codes are appropriate.
    | No attempt to code

    MUSIC
    LDR njm a 00
    008 / 30,31 (usually blank)
    008 / 18,19 composition form

    Guidelines for applying content designators:

    Code: Description:
    an Anthems
    bd Ballads
    bt Ballets
    bg Bluegrass music
    bl Blues
    cn Canons and rounds i.e., compositions employing strict imitation
    throughout
    ct Cantatas
    cz Canzonas Instrumental music designated as a canzona.
    cr Carols
    ca Chaconnes
    cs Chance compositions
    cp Chansons, polyphonic
    cc Chant, Christian
    cb Chants, Other
    cl Chorale preludes
    ch Chorales
    cg Concerti grossi
    co Concertos
    cy Country music
    df Dance forms Includes music for individual dances except those that
    have separate codes defined: mazurkas, minuets, pavans, polonaises,
    and waltzes.
    dv Divertimentos, serenades, cassations, divertissements, and notturni
    Instrumental music designated as a divertimento, serenade, cassation,
    divertissement, or notturno.
    ft Fantasias Instrumental music designated as fantasia, fancies,
    fantasies, etc.
    fm Folk music Includes folk songs, etc.
    fg Fugues
    gm Gospel music
    hy Hymns
    jz Jazz
    md Madrigals
    mr Marches
    ms Masses
    mz Mazurkas
    mi Minuets
    mo Motets
    mp Motion picture music
    mc Musical revues and comedies
    mu Multiple forms
    nc Nocturnes
    nn Not applicable Indicates that form of composition is not applicable
    to the item. Used for any item that is a non-music sound recording.
    op Operas
    or Oratorios
    ov Overtures
    pt Part-songs
    ps Passacaglias Includes all types of ostinato basses.
    pm Passion music
    pv Pavans
    po Polonaises
    pp Popular music
    pr Preludes
    pg Program music
    rg Ragtime music
    rp Rhapsodies
    rq Requiems
    ri Ricercars
    rc Rock music
    rd Rondos
    sd Square dance music
    sn Sonatas
    sg Songs
    st Studies and exercises Used only when the work is intended for
    teaching purposes (usually entitled Studies, Etudes, etc.).
    su Suites
    sp Symphonic poems
    sy Symphonies
    tc Toccatas
    ts Trio-sonatas
    uu Unknown Indicates that the form of composition of an item is
    unknown. Used when the only indication given is the number of
    instruments and the medium of performance. No structure or genre is
    given, although they may be implied or understood.
    vr Variations
    wz Waltzes
    zz Other Indicates a form of composition for which none of the other
    defined codes are appropriate (e.g., villancicos, incidental music,
    electronic music, etc.).
    | No attempt to code

    * Index: I propose that the above guidelines be used for indexing a record for its itemtype, format, audience, and content
    * MARC Framework: The framework should require that the above fields be filled with values
    * Cataloging: We need to make sure that all our records have appropriate values in the above fields
    * Interface Design: need to make sure the interface is easy to use

    Organization of Materials

    This gets tricky. Please keep in mind that I haven’t had any formal library science training and the following is what I’ve gleaned by working with librarians from many different systems. Every library seems to handle these issues differently, but here are some definitions that I hope are universal:

    * Collection Code - used to specify circulation rules on a given record or item
    * Classification - a taxonomy for organizing a library collection into subjects
    * Shelving Location - the general location of an item within the library (general stacks, reference area, new books shelf, science fiction area, etc.)
    * Call Number - a standards-based scheme for organizing a given item on the shelf. Typically, the call number is composed of some part of the classification
    * Local Call Number - a locally-defined scheme for organizing items on the shelf.
    * Item Call Number - an item-specific call number, sometimes used to distinguish between two copies of the same item on the same shelf. Also used for inventory as a way to specify which shelf a given item is associated with.

    Libraries typically simplify the above elements to ease record maintenance and searching of materials. For instance, NPL currently uses a simplified scheme that consists of the following:
    Name | Use | Composition | Location
    Item Type | general shelving location, circulation rules | locally defined | 942$c
    Call Number | shelf order, subject classification | from Dewey or locally defined | 942$c

    For Koha 2.4, we’re proposing to change that scheme slightly to enable better search options in the catalog. Here is the scheme that we’re proposing:
    Name | Use | Composition | Location
    Classification | subject classification | Dewey | 082
    Collection Code (itemtype) | circulation rules, general shelving location | locally defined | 942$c
    Call Number | shelf order | Local Call Number (fiction) or Classification (non-fiction) | ?
    Local Call Number | shelf order | NPL's local call number scheme ( ) | 942$c
    Item Call Number | inventory | Call Number | 952?

    Looking forward, we may want to adopt an even more complete scheme such as the following:
    Name | Use | Composition | Location
    Classification | subject classification | Dewey | 082
    Collection Code | circulation rules | locally defined | 942$c
    Shelving Location Code | location of item (new items, general stacks, mysteries and sci-fi, etc.) | locally defined | ?
    Call Number | shelf order | Local Call Number (fiction) or Classification (non-fiction) | ?
    Local Call Number | shelf order | NPL's local call number scheme ( ) | 942$c
    Item Call Number | inventory | Call Number + some other identifier | ?

    Here are some additional thoughts on the topic of Material Organization

    * There is currently crossover between itemtypes and call numbers, but I think we can safely ignore it
    * Staff need to search and sort by 'Call Number'. A 'Call Number Search' is defined as:
      o search Classification
      o if not found, search 'Local Call Number'
      o sorting of this search point is based on which type of 'call number' the search was on
    * Sorting by call numbers outside of the context of a call number search will consist of sorting by number first, then by text
    * Item Call Numbers are required for inventory
    * NPL does not use shelving locations

    Display of Records

    Here is a list of requests I know about:

    * Volume Numbers (245$n) should be included in title display and search
    * Subjects should display in a semantically correct way

    ZebraProgrammerGuide Some useful information about managing Zebra for Koha

    http://wiki.koha.org/doku.php?id=zebraprogrammerguide

    Here are some commands that you may find useful if you’re managing a Zebra installation with Koha
    Counting Records

    You can find out how many records are in your database thusly:

    Z> base IR-Explain-1
    Z> form sutrs
    Z> f @attr exp1 1=1 databaseinfo
    Sent searchRequest.
    Received SearchResponse.
    Search was a success.
    Number of hits: 4, setno 1
    SearchResult-1: databaseinfo(4)
    records returned: 0
    Elapsed: 0.069880
    Z> s
    Sent presentRequest (1+1).
    Records: 1
    [IR-Explain-1]Record type: SUTRS
    explain:
    databaseInfo: DatabaseInfo
    commonInfo:
    dateAdded: 20020911101011
    dateChanged: 20020911101011
    languageCode: EN
    accessinfo:
    unitSystems:
    string: ISO
    attributeSetIds:
    oid: 1.2.840.10003.3.5
    oid: 1.2.840.10003.3.1
    oid: 1.2.840.10003.3.1000.81.2
    schemas:
    oid: 1.2.840.10003.13.2
    name: gils
    userFee: 0
    available: 1
    recordCount:
    recordCountActual: 48
    zebraInfo:
    recordBytes: 123562
    Elapsed: 0.068221
    Z> s
    Sent presentRequest (2+1).
    Records: 1
    [IR-Explain-1]Record type: SUTRS

    EncodingScratchPad Some notes on encoding and charsets

    http://wiki.koha.org/doku.php?id=encodingscratchpad

    Introduction

    In versions prior to Koha 2.2.6, careful attention was not given to dealing with character sets correctly. This document attempts to raise awareness of character set issues so that Koha developers and administrators can understand how best to proceed with development as well as setup and configuration of Koha systems.
    MARC Records

    MARC21 records can 'legally' have only two encodings: MARC-8 or UTF-8. The encoding is set in position 9 of the leader (LEADER / 09). MARC-8 is not recognized by modern web browsers, and since Koha is a web-based system, if you are using MARC21 records the encoding MUST be UTF-8. This means that records should be pre-processed before entering your Koha system (in whatever way they enter). Some of this is handled internally within Koha, but don't leave it to chance: if you're migrating MARC21 data into Koha, expect to spend a significant amount of time properly pre-processing and storing your data.

    Conversion from MARC-8 to UTF-8 for MARC21 records is handled in Koha with the MARC::* suite of Perl modules. There are significant issues with properly configuring your system (with the proper SAX parsers, etc.) and there are also some questions raised about whether this suite is handling all character set / encoding issues correctly. For some details, please refer to the following posts:

    http://www.nntp.perl.org/group/perl.perl4lib/2369

    http://lists.nongnu.org/archive/html/koha-devel/2006-07/msg00000.html

    One thing to remember is that LEADER / 09 is used in MARC::* to determine the encoding of a given record. This means that if it’s not set correctly, you will very likely mangle any records you are importing/exporting.
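    Checking what a record claims about its own encoding is a one-character substr on the leader; the 24-byte sample leaders below are fabricated:

    ```perl
    #!/usr/bin/perl
    # Inspect LEADER/09 of a raw MARC21 record: 'a' means UCS/Unicode
    # (UTF-8), a blank means MARC-8.
    use strict;
    use warnings;

    sub leader_encoding {
        my ($raw) = @_;
        my $pos9 = substr( $raw, 9, 1 );
        return $pos9 eq 'a' ? 'UTF-8'
             : $pos9 eq ' ' ? 'MARC-8'
             :                "unknown ($pos9)";
    }

    # Fabricated leaders; only position 9 differs:
    print leader_encoding('00714cam a2200205 a 4500'), "\n";   # UTF-8
    print leader_encoding('00714cam  2200205 a 4500'), "\n";   # MARC-8
    ```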
    System

    Be sure to set your system locales up correctly to use UTF-8. You can test your locale settings by running:

    $ locale

    or

    $ echo $LANG
    en_US.UTF-8

    If the answer is en_US rather than en_US.UTF-8 (or the UTF-8 locale for your language), the system is configured for ISO-8859-1/Latin-1. Be sure to reconfigure your locales. On Debian, you can configure locales thusly:

    $ sudo dpkg-reconfigure locales

    Then, you’ll need to quit your shell session and log back in again to check the default.

    NOTE: on some systems, the root user won't have locale set properly, use
    a non-root user when working with Koha and the 'sudo' command if you need
    elevated permissions

    Apache2

    Be sure to have these lines in your httpd.conf:

    AddCharset UTF-8 .utf8
    AddDefaultCharset UTF-8

    MySQL 4.1
    Server Configuration

    MySQL version 4.1 is the absolute minimum if you want to handle encoding correctly

    Please refer to the MySQL Manual Chapter 10: http://dev.mysql.com/doc/refman/4.1/en/charset.html

    You will probably have to edit your my.cnf to set some variables so that the server will use utf8 by default. Even standard packages like the one provided by Debian Sarge have the variables set to use latin1 by default. Make sure you have the following in your my.cnf:

    init-connect = 'SET NAMES utf8'
    character-set-server=utf8
    collation-server=utf8_general_ci

    Connect to mysql using a non-root user and type:

    show variables;

    NOTE: The root user won't show the variables correctly for reasons I haven't had time to
    investigate ... connect as the kohaadmin user to check the values.

    Check to make sure the following are set to utf8:

    | character_set_client | utf8 |
    | character_set_connection | utf8 |
    | character_set_database | utf8 |
    | character_set_results | utf8 |
    | character_set_server | utf8 |
    | character_set_system | utf8 |
    | character_sets_dir | /usr/share/mysql/charsets/ |
    | collation_connection | utf8_general_ci |
    | collation_database | utf8_general_ci |
    | collation_server | utf8_general_ci |

    You must create your Koha database _after_ you set the character-set defaults; otherwise the database could be created with the wrong defaults
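    For example, stating the character set explicitly at creation time (assuming your database is named koha; adjust to your installation):

    ```sql
    CREATE DATABASE koha DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
    ```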

    If you are moving from a MySQL 4.0 database to 4.1, you need to pay special attention to how you deal with your charsets. If you are storing UTF-8 data in MySQL 4.0 but your table types are set to latin1, you will need to convert the columns to blob or binary before changing the table type; otherwise MySQL will attempt a conversion and you will end up with double-encoded UTF-8:

    http://dev.mysql.com/doc/refman/4.1/en/charset-conversion.html

    Also, if you are storing MARC-8 encoded data in a latin1 type database, you probably need to do the same thing: export your records from marc_subfield_table into a MARC file (after converting to type blob), process the file to convert everything to UTF-8, change the table type in MySQL, then re-import.
    Database Backups

    http://www.oreillynet.com/onlamp/blog/2006/01/turning_mysql_data_in_latin1_t.html

    http://textsnippets.com/posts/show/84 (probably not the best way)
    mysqldump
    mysqlhotcopy
    Perl

    Here are some links to good references for perl encoding issues:

    http://www.ahinea.com/en/tech/perl-unicode-struggle.html http://search.cpan.org/~jhi/perl-5.8.0/pod/perluniintro.pod
    DBI Module

    http://www.zackvision.com/weblog/2005/11/mt-unicode-mysql.html

    Movable Type uses the Perl modules DBI and DBD::mysql to
    access the MySQL database. And guess what? They don't have
    any Unicode support. In fact, forget marking the UTF-8 flag
    properly; according to this, DBD::mysql doesn't even preserve
    the UTF-8 flag when it's already there.

    Wait for Unicode support for DBI/DBD::mysql which might be a
    long time since nobody is sure if it should be provided by the
    database-independent interface DBI or by the MySQL driver DBD::mysql
    or both together in some way.

    Use decode_utf8 on every output from the database. This is not very easy to do.
    http://perldoc.perl.org/Encode.html#PERL-ENCODING-API
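    What decode_utf8 actually does, sketched with the core Encode module (in the DBI case you would apply it to every string fetched from the database):

    ```perl
    #!/usr/bin/perl
    # Decode raw UTF-8 bytes (as DBD::mysql returns them) into Perl
    # character strings, and encode again on the way back out.
    use strict;
    use warnings;
    use Encode qw(decode_utf8 encode_utf8);

    # Bytes as they might come back from a fetchrow call: "café" in UTF-8.
    my $bytes = "caf\xC3\xA9";
    print length($bytes), "\n";        # 5 bytes, no UTF-8 flag

    my $chars = decode_utf8($bytes);   # now a 4-character Perl string
    print length($chars), "\n";        # 4

    my $back = encode_utf8($chars);    # re-encode before sending to the DB
    print $back eq $bytes ? "roundtrip ok\n" : "mismatch\n";
    ```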

    Use a patch which blesses all database data (yes, that includes the binary
    fields) as UTF-8, based on a flag you set when connecting to the database.
    http://lists.mysql.com/perl/3563 (one patch)
    http://dysphoria.net/2006/02/05/utf-8-a-go-go/ (another)
    http://perl7.ru/lib/UTF8DBI.pm

    Here’s one that seems to indicate that it’s best to grab DBI from CPAN:

    http://www.codecomments.com/archive237-2006-4-786695.html

    DBD::mysql will just pass
    everything through unaltered. So if you use UTF-8 as the connection charset,
    you have to encode('utf-8', ...) all queries and parameters, unless you
    are sure that they are either plain ASCII or already have the UTF-8 bit
    set. And you will get raw UTF-8 strings back, which you have to decode()
    explicitly.

    However, I notice that on Debian Sarge (on which I did my testing),
    libdbd-mysql-perl depends on libmysqlclient12. So there may be a problem
    with mixing releases (The server is 4.1, but libmysqlclient12 belongs to
    4.0, which doesn't know about UTF-8).

    CGI Module

    Coming soon ...
    Opening Files

    Coming soon ...
    using bulkmarcimport

    Coming soon ...
    Zebra

    Coming soon ...

    InstallingZebraPlugin226 How to install the Zebra plugin for 2.2.6

    http://wiki.koha.org/doku.php?id=installingzebraplugin226

    Introduction

    Koha’s Zebra plugin is a new feature with 2.2.6 that allows an otherwise ordinary rel_2_2 Koha to use Zebra for bibliographic data storage, search and retrieval. Why you would want to integrate Koha and Zebra is a topic for another document. This guide assumes you’re sold on the idea, and already have some experience managing a Koha system. In it, we’ll walk through the process of:

    * configuring your system
    * symlinking your installation environment to a 'dev-week' CVS repository
    * making needed changes to your Koha MySQL database
    * installing, configuring, and starting Zebra
    * importing your data

    Before following this install document please refer to the “Installing Koha (2.2.6) on Debian Sarge” and the “Updating Koha” documents available from http://kohadocs.org. The assumption is that you’ve already got Koha 2.2.6 installed and a working knowledge of how to symlink a CVS working repository to your installation. If you don’t know what that means, DON’T PROCEED. The Zebra integration adds quite a bit of complexity to the installation and maintenance of Koha, so be warned.

    I also highly recommend you read over the Zebra docs at http://indexdata.dk/zebra if you’re going to be managing a Zebra installation.

    Finally, DO NOT perform these steps on a production system unless you have fully tested them on a test system and are comfortable with the process. Doing otherwise could lead to serious data and configuration loss. And of course, before doing anything, please back up your data.
    Preparing the server for Zebra
    Install Yaz, Zebra, Net::Z3950::ZOOM
    on Debian

    Put the following in your /etc/apt/sources.list

    # for Yaz Toolkit
    deb http://ftp.indexdata.dk/debian indexdata/sarge released
    deb-src http://ftp.indexdata.dk/debian indexdata/sarge released

    Now run

    # apt-get update && apt-get install idzebra

    (yaz will automatically be installed as it’s a dependency)

    Install the latest version of Net::Z3950::ZOOM from CPAN:

    # perl -MCPAN -e 'install Net::Z3950::ZOOM'

    On other systems

    Get latest zebra & yaz sources from : http://www.indexdata.com/yaz/ and http://www.indexdata.com/zebra/ Install Yaz:

    # tar xvfz yaz-version.tar.gz
    # cd yaz-version
    # ./configure
    # make
    # make install

    Then install Zebra:

    # tar xvfz idzebra-version.tar.gz
    # cd idzebra-version
    # ./configure
    # make
    # make install

    Install the latest version of Net::Z3950::ZOOM from CPAN:

    # perl -MCPAN -e 'install Net::Z3950::ZOOM'

    Prepare the filesystem

    Check out dev-week from CVS

    # cvs -z3 -d:pserver:anonymous@cvs.savannah.nongnu.org:/sources/koha export -r dev_week koha

    NOTE: This is not a 'check out' but an 'export'. The main difference is that an 'export' contains no CVS directories.

    Symlink your Koha 2.2.6 install environment to the dev-week ‘working copy’ (see the ‘Updating Koha’ document for details)
    The zebraplugin directory

    In the dev-week Koha cvs repository you'll find a zebraplugin directory that contains all the files you'll need to set up Zebra.
    etc

    Within the etc directory, you’ll find a koha.xml file that is a replacement for the koha.conf file in rel_2_2. This file is where you specify the location of many of the files in the zebraplugin directory. You’ll need to pick a directory structure that works with your configuration and edit the file accordingly. For instance, on my systems, I have a structure like the following:

    /koha
    |-- cvsrepos
    |-- etc
    |-- intranet
    |-- log
    |-- opac
    |-- utils
    `-- zebradb

    The default plugin koha.xml uses this directory structure as a point of reference (the etc and zebradb directories above correspond to the same directories in the kohaplugin directory).
    zebradb

    This directory contains the filesystem that will store all of Zebra’s indexes. The only file you should need to edit in the zebradb file structure is the kohalis file within biblios/tab. This file should contain the user/password specified in the koha.xml directive.

    Depending on your system, you may also need to modify some idzebra directories. On my Mandriva system, the Zebra parameters are in /usr/local/share/idzebra rather than /usr/local/idzebra. To check, run:

    which zebraidx

    If the answer is

    /usr/local/bin/zebraidx

    then update zebra-biblios.cfg & zebra-authorities.cfg and modify the line

    profilePath:${srcdir:-.}:/usr/share/idzebra/tab/:/koha/zebraplugin/zebradb/biblios/tab/:${srcdir:-.}/tab/

    to

    profilePath:${srcdir:-.}:/usr/local/share/idzebra/tab/:/koha/zebraplugin/zebradb/biblios/tab/:${srcdir:-.}/tab/

    utils

    The utils directory contains the utilities you’ll need to perform the rest of the installation / upgrade, which brings us to ...
    Modify the SQL database

    Here are tasks you’ll want to perform whether or not this is a brand new Koha install:

    1. updatedatabase (using updatedatabase from rel_2_2)
    2. update to the latest bib framework
    3. convert_to_utf8.pl (from dev-week)

    If you’re migrating from a previous version of Koha (very likely) you’ll need to also do the following:

    1. run rebuild-nonmarc from dev_week if your framework has changed
    2. run missing090field.pl (from dev-week)
    3. run biblio_framework.sql from within the mysql monitor (from dev-week)
    4. run phrase_log.sql from within the mysql monitor (from dev-week)
    5. export your MARC records
    6. run them through a preprocessing routine to convert them to UTF-8
    7. double-check again for missing 090 fields (very critical)

    Importing Data

    If you’re upgrading an existing Koha installation, your MySQL database already contains the record data, so all we need to do is import the bibliographic data into Zebra, like so:

    # zebraidx -g iso2709 -c /koha/etc/zebra-biblios.cfg -d biblios update /path/to/records
    # zebraidx -g iso2709 -c /koha/etc/zebra-biblios.cfg -d biblios commit
    -g sets the record group; files in the same group share the same extension
    -c points to the config file
    -d is the name of the biblio server database

    If you need to batch import records that don’t exist in your Koha installation, you can use bulkmarcimport as with rel_2_2:

    # cd /path/to/dev_week/repo/
    # export KOHA_CONF=/path/to/koha.xml
    # perl misc/migration_tools/bulkmarcimport /path/to/records.mrc

    Starting Zebra

    zebrasrv -f /koha/etc/koha.xml

    Yes, it’s that simple. :-)

    The old 2.2 RoadMapToMarc

    http://wiki.koha.org/doku.php?id=roadmaptomarc

    1. ToDoMARC : the complete ROADMAP, and where we are...
    2. WhatIsMarc : explains what MARC is
    3. MarcDBStructure : almost up to date; some indexes and a field or two have been added
    4. MarcKohaMap : how we map old Koha DB fields to USMARC subfields

    not up to date, but useful:

    1. MarcOperation : how we will manage the different MARC standards in Koha

    completely out of date:

    1. CataloguingAPI : see Biblio.pm instead (lots of comments at the beginning)
    2. WalktroughToMarc : see ToDoMARC instead

    ZOOMSearchBeta ZOOM Searching Beta Notes

    http://wiki.koha.org/doku.php?id=zoomsearchbeta

    Hi folks,

    Well, you knew it was coming, it’s been promised for, like, forever ... and now, it’s finally here! I’m proud to announce the beta version of the new Koha searching module that we’ve been raving about.

    Before I show you the link though, I must warn you, this is still a beta product, things might not work perfectly, and that’s because we’re still working on it. If something doesn’t look right, drop Owen or me a note and let us know (either on chat, or on the forum or via email – jmf@liblime.com is my current one).

    And now, without further ado, the link:

    http://zoomopac.liblime.com

    Let’s walk through the various search features of the new search:
    SIMPLE SEARCH

    The SIMPLE SEARCH page provides a simple, patron-friendly, Google-like interface to the catalog. Patrons can type simple, intuitive phrases like “harry potter” (titles) or “Chorale from Beethoven’s Symphony no. 9” (song titles), “Neal Stephenson” (authors), etc.

    The SIMPLE SEARCH also exposes a very intuitive formal query language called the Common Command Language - CCL (this is actually an international query standard: ISO 8777). With CCL, you can do queries like:

    ti=cryptonomicon
    au=neal stephenson
    isbn=0380973464

    If you ever wonder how to use CCL, just click on the little [?] next to the search input box.
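    CCL terms can also be combined into one query; a couple of hedged examples, assuming the standard ISO 8777 boolean operators (and, or, not) are enabled in this profile:

```
ti=cryptonomicon and au=stephenson
au=neal stephenson not ti=cryptonomicon
```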

    For most queries, you can probably get away with just using the SIMPLE SEARCH, but sometimes ...
    ADVANCED SEARCH

    The ADVANCED SEARCH provides a guided interface to some ‘prefab’ search types like ‘author’, ‘title’, etc. For example, say you know the exact title of an item ... say it’s ‘It’ by Stephen King, and you want to find it in the catalog (pun intended). Try the ‘Exact Title’ option. The possibilities here are really endless ... if you want us to add a new search type, just drop Owen or me a note and we’ll make it happen.

    You’ll also notice something else about the Advanced Search, something we’re not quite finished with and could use some feedback on. Remember the old ‘Item Type’ limit? Ever try to find ‘all videos’ or ‘all DVDs’? It was pretty tough because those formats were in several places at the same time. Well now we’ve broken things into three categories:

    * Audience: (EASY, YA, Juvenile, Adult, etc.)
    * Content: (Fiction, Non-fiction, Biography, etc.)
    * Format: (Large Print, VHS, DVD, CD-ROM, etc.)

    The old ‘Item Type’ search is now re-labeled as ‘Collection Code’.

    Please try out these new options, let us know if they work as you expected, or if there are types missing, etc.
    POWER SEARCH

    The POWER SEARCH is, plainly put, for infomaniacs :-). It exposes the full syntax of the library-created Z39.50 protocol in all its glory: search attributes, boolean operators, index scanning, the whole deal.

    There are also two additional formal query syntax search boxes in the POWER SEARCH tab: CQL and PQF/RPN.
    PROXIMITY SEARCH

    The PROXIMITY SEARCH does one thing, and it does it well. It allows you to find words that occur within a certain distance of each other in any of the fields listed in the drop-down box (let us know if you want others added).

    Well ... that’s a start ... There are lots more features to show off, and some to refine ... above all, we really need your feedback on this system so we can make sure it’s meeting everyone’s expectations.

    Ta ta for now,

    Joshua

    ZebraProtocolSupport Zebra Protocol Support (Z39.50, bib1, etc.)

    http://wiki.koha.org/doku.php?id=zebraprotocolsupport

    These attribute types are recognized regardless of attribute set. Some are recognized for search, others for scan.

    Search

    Type  Name           Version
    7     Embedded Sort  1.1
    8     Term Set       1.1
    9     Rank weight    1.1
    9     Approx Limit   1.4
    10    Term Ref       1.4

    Embedded Sort

    The embedded sort is a way to specify a sort within the query itself, removing the need to send a separate Sort Request. It is faster and does not require clients that support the Sort Facility.

    The value after attribute type 7 is 1=ascending, 2=descending. The attributes+term (APT) node is separate from the rest and must be @or’ed in. The term associated with the APT is the sort level: 0=primary sort, 1=secondary sort, etc. Example:

    Search for water, sort by title (ascending):

    @or @attr 1=1016 water @attr 7=1 @attr 1=4 0

    Search for water, sort by title ascending, then date descending:

    @or @or @attr 1=1016 water @attr 7=1 @attr 1=4 0 @attr 7=2 @attr 1=30 1

    Term Set

    The Term Set feature is a facility that lets a search store the terms it hits in a “pseudo” result set; thus a search (as usual) plus a scan-like facility. It requires a client that can handle named result sets, since the search generates two result sets. The value for attribute 8 is the name of a result set (a string). The terms in the term set are returned as SUTRS records.

    Search for u in title, right-truncated, and store the result in a result set named uset:

    @attr 5=1 @attr 1=4 @attr 8=uset u

    The model has one serious flaw: we don’t know the size of the term set.

    Rank weight

    Rank weight is a way to pass a value to a ranking algorithm, so that one APT has one value while another has a different one.

    Search for utah in title with weight 30, as well as in any field with weight 20:

    @attr 2=102 @or @attr 9=30 @attr 1=4 utah @attr 9=20 utah

    Approx Limit

    Newer Zebra versions normally estimate the hit count for every APT (leaf) in the query tree. These hit counts are returned as part of the searchResult-1 facility.

    By setting a limit for an APT we can make Zebra switch to an approximate hit count once a certain hit-count limit is reached. A value of zero means exact hit count.

    We are interested in an exact hit count for a, but for b we allow estimates at 1000 and above:

    @and a @attr 9=1000 b

    This facility clashes with rank weight! Fortunately this is a Zebra 1.4 feature, so we can change it without upsetting anybody!

    Term Ref

    Zebra supports the searchResult-1 facility.

    If attribute 10 is given, it specifies a subqueryId value returned as part of the search result. It is a way for a client to name an APT part of a query.

    Scan

    Type  Name               Version
    8     Result set narrow  1.3
    9     Approx Limit       1.4

    Result set narrow

    If attribute 8 is given for scan, the value is the name of a result set. Each hit count in the scan is @and’ed with the given result set.

    Approx limit

    Approx limit (as for search) is a way to enable approximate hit counts for scan hit counts. However, it does NOT appear to work at the moment.

    Installing Koha on Ubuntu amd64

    http://wiki.koha.org/doku.php?id=ubuntu_amd64

    Generally to install Koha on Ubuntu Dapper Drake’s amd64 platform you can just follow the instructions at http://www.kohadocs.org/Installing_Koha_on_Debian_sarge.html, but I found a few differences while I was going through it.

    Note that I used the server edition rather than the desktop version.

    Before installing “Event” from CPAN, install the build-essential package via apt-get. This will allow Event to install without failing.

    Note that presently there are packages for the Yaz toolkit in Ubuntu universe, but they are a little old. If you’re just doing a plain install of Koha without updating to cvs or using Zebra, they’re probably fine, but if you’re planning on using the Zebra plugin then you should download the Yaz source tarball instead, and compile the packages yourself. (This is because installing Net::Z3950::ZOOM will fail because the version of Yaz is too old.)
    Generating Yaz deb packages from source

    The folks at Index Data have already done the package configuration for Yaz (and Zebra) so creating packages from the source is fairly simple:

    * Download the newest source tarball from ftp.indexdata.dk/pub/yaz and untar it.
    * Install fakeroot and debhelper with apt-get.
    * Run “dpkg-buildpackage -rfakeroot -b”.
    * It will probably give you a list of missing dependent packages. Install them with apt-get, and repeat the last step.
    * Once the package has finished building, cd up a directory, where you should find your .deb packages.
    * Install them with “dpkg -i packagename”.

    If you are planning on installing zebra, you can follow the same procedure (downloading the idzebra tarball from /pub/zebra, of course).
    Other notes

    * cvs doesn’t appear to be installed by default on the server edition. You’ll have to apt-get it.
    * I also had to install XML::SAX, Class::Accessor, and Business::ISBN from CPAN. For the Zebra plugin, I also had to install XML::SAX::Expat, XML::Parser, and XML::Simple.
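    A quick way to see which of the CPAN modules mentioned above still need installing (a sketch, assuming perl is already on the PATH): perl -M<Module> -e1 exits non-zero when a module is missing.

```shell
# Collect the modules from the list above that are not yet installed.
missing=""
for m in XML::SAX Class::Accessor Business::ISBN XML::SAX::Expat XML::Parser XML::Simple; do
  perl -M"$m" -e1 2>/dev/null || missing="$missing $m"
done
echo "still missing:$missing"
```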

    Sunday, July 02, 2006

    How do I go about maintaining a module when the author is unresponsive?

    Sometimes a module goes unmaintained for a while because the author is pursuing other interests, is busy, etc., and another person who needs changes applied to that module may become frustrated when their email goes unanswered. CPAN does not mediate or dictate a policy in this situation and relies on the respective authors to work out the details. If you treat other authors as you would like to be treated in the same situation, the right way to handle such problems should be obvious.

    * Be courteous.
    * Be considerate.
    * Make an earnest attempt to contact the author.
    * Give it time. If you need changes made immediately, consider applying your patches to the current module, changing the version and requiring that version for your application. Eventually the author will turn up and apply your patches, offer you maintenance of the module or, if the author doesn't respond in a year, you may be able to gain maintainership by showing interest.
    * If you need changes in order for another module or application to work, consider making the needed changes and bundling the new version with your own distribution and noting the change well in the documentation. Do not upload the new version under the same namespace to CPAN until the matter has been resolved with the author or CPAN.

    Simply keep in mind that you are dealing with a person who invested time and care into something. A little respect and courtesy go a long way.

    Wednesday, June 28, 2006

    error report

    > 1.
    > Overall usability is very good and the basic functions are all there,
    > but a few rough spots still need work.
    > It would be even better if customization were possible in the future,
    > e.g. features schools need: class grouping, student numbers, seat numbers, and so on.
    >
    > 2.
    > Patron categories cannot be added.
    > Switching the template to default makes adding work;
    > switch back to npl after adding.
    > Needs a follow-up fix.
    > Fixed.
    >
    > 3.
    >
    >
    Drop-down menus such as language of the work, country, cataloging-rule code, etc. should allow a default value to be defined,
    > or defaults that fit Taiwan's needs.
    > Pending.
    >
    > 4.
    > Chinese itemtype codes cause errors,
    > yet we want itemtype codes in Chinese wherever possible!
    > If a change is needed, update:
    > biblioitems
    > itemtype
    > marc_subfield_table
    > marc_word
    > The tables holding the flattened subfield data (marc_word,
    > marc_subfield_table) must be handled with great care!
    >
    > 5.
    > Check-in does not work!
    > Returns always fail when no branch is configured,
    > so set up a branch first:
    > add a branch under Koha administration -> library branches,
    > then fix the patron records (borrowers):
    > UPDATE `borrowers` SET `branchcode` = '01'
    > so that they carry branch data!
    > errlog
    > [Thu Jun 22 15:18:13 2006] [error] [client
    > 163.22.52.56] Premature
    > end of script headers: returns.pl, referer:
    > http://163.22.52.4:8080/cgi-bin/koha/circ/returns.pl
    >
    > 6.
    > Patron circulation rules:
    > Koha administration -> issuing rules -> set -> 7,3 (7 days, 3 items)
    >
    > 7.
    > Delete a bibliographic record,
    > delete an item:
    >
    >
    delete an item: Koha administration -> cataloging -> search -> duplicate item -> existing item -> delete
    > delete a bibliographic record:
    >
    > Koha administration -> cataloging -> search -> MARC (top of page) -> delete record
    > ps: this deletes the items as well
    >
    > 8.
    > Staff permissions could not be adjusted
    > because the userflag values were in Chinese.
    > Solved by the new templates.
    >
    > 9.
    > Statistics:
    > administration -> statistics and reports -> catalog statistics -> limit by call number
    > has no input form;
    > "from to , limit to []" and
    > "characteristic" made no sense at all.
    > Finally figured it out:
    > the call-number limit form is "from to , limit to [] characteristic".
    > zh_TW/reports/catalogue_stats.tmpl
    > line 87 is missing ~~~ ,
    > "characteristic" might be clearer if renamed to "level".
    > Solved: the starting range of classification numbers and the displayed level can now be limited.
    >
    >
    > 10.
    > Circulation:
    > with staff permissions set to catalogue and circulate,
    > both checkout and check-in work.
    > The problem is that at check-in there is no way to see which books are still out;
    > you have to go back to the checkout screen and re-enter the patron barcode!
    > Pending.
    >
    > 11.
    > During cataloging, when adding a bibliographic record,
    > the classification number, author number and volume are already entered in field 805 of the holdings record;
    > when adding an item, the call number should be carried over automatically
    > to avoid mostly needless duplicate typing.
    > Pending.
    >
    > 12.
    > Management interface for renewals.
    > To discuss: should the management interface let staff renew for a patron?
    > >> Whether staff can renew is not the point for me.
    > >> I only ask that if staff renewal exists, the function and interface work properly,
    > >> and that if staff renewal does not exist, the function and interface also behave properly.
    > >> Do not show a renew option in the interface that does not actually work.
    > Fixed.
    >
    > 13.
    > Catalog statistics
    > cannot count items added within a time range.
    > Pending.
    >
    > 14.
    > Searching
    > requires typing "*" by hand.
    > Retrieval needs improvement.
    > Pending.
    >
    > 15.
    > The interface translation is already quite complete;
    > polish the wording and it will be fully done.
    > Pending.
    >
    > 16.
    > Statistics and reports -> circulation statistics
    > has no export function,
    > so printing the web page is the only option.
    > Suggest adding an export function.
    > Pending.
    >
    > 17.
    > Catalog statistics
    > * stocktaking
    > * catalog format
    > malfunctioning.
    > Pending.
    >
    > 18.
    > Rankings
    > * most popular books
    > malfunctioning.
    > Pending.
    >
    > 19.
    > Not working:
    > * patrons who have never borrowed
    > * books that have never been borrowed
    > Translation problems plus malfunctions.
    > Pending.
    >
    > 20.
    > Borrower category
    > should be a drop-down menu.
    > Pending.
    >
    > 21.
    > Average loan period
    > cannot be shown on the web page.
    > Pending.
    > 22.
    > None of the date buttons appear in Firefox;
    > dates can only be entered by hand in a fixed format, e.g. "2006-06-25".
    > Pending.
    >
    > 23.
    > In the left menu,
    > the serials entry sits in an odd place.
    > Pending.
    >
    > 24.
    > Patrons have no default password,
    > which also means they cannot renew by themselves by default,
    > while staff cannot renew on their behalf either.
    > So staff must set a password for the patron,
    > who can then log into the OPAC and renew.
    > This interface seems unfriendly.
    > Pending.
    >
    > 25.
    > Once staff have processed a "purchase suggestion",
    > the patron should no longer see it (per the description below),
    > but it is still visible.
    > Is this a system bug?
    > To be confirmed.
    > Pending.
    >
    > Testing is not yet complete;
    > more testing to follow!
    > Serials, acquisitions, and overdue-fine functions have not been tested yet.
    >
    > thomas
    >

    Monday, June 26, 2006

    barcode class

    Code 128B: upper- and lowercase letters and digits
    EAN128B: upper- and lowercase letters and digits
    Code 39: uppercase letters and digits
    Code 93: uppercase letters and digits
    Code 25 interleaved: digits only
    EAN13: 13 digits; the last digit is a check digit
    EAN8: 8 digits; the last digit is a check digit
    UPC_A: 12 digits; the last digit is a check digit

    EAN (European Article Number) barcodes were developed in 1977 by twelve European industrial countries and are now used in dozens of countries; Taiwan obtained EAN membership in 1985. EAN-8 is a shortened form of the EAN code: compared with EAN-13 it drops the manufacturer code and shortens the product code to four digits, which with a check digit makes eight digits in total.

    The "EAN-13 and EAN-8 barcode fonts" let you print barcodes from a programming language (currently only VB and VC can print these barcodes). The fonts come in a laser/inkjet version and a dot-matrix version; only the laser/inkjet version is provided.

    EAN-13 and EAN-8 can use the characters 0-9. Because the encoding rules are involved (see the vendor's site for details), a test file is attached that can be printed directly. The full version will include a dedicated calculation program; in the trial version the last six of the 13 digits cannot use the digits 1, 5, or 9.

    http://www.jengleng.com/new007.htm
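    The check digit mentioned above is straightforward to compute: weight the first 12 digits alternately 1 and 3, then pick the check digit that brings the weighted sum up to a multiple of 10. A small sketch; the sample number is the well-known ISBN-13 example, not one of the vendor's test files:

```shell
# EAN-13 check digit: odd positions (1st, 3rd, ...) get weight 1, even
# positions weight 3; the check digit pads the sum to a multiple of 10.
ean12=978030640615
sum=0
i=0
while [ $i -lt 12 ]; do
  d=$(printf '%s' "$ean12" | cut -c$((i + 1)))
  if [ $((i % 2)) -eq 0 ]; then sum=$((sum + d)); else sum=$((sum + 3 * d)); fi
  i=$((i + 1))
done
check=$(( (10 - sum % 10) % 10 ))
echo "$ean12$check"   # prints 9780306406157
```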

    Reports & interlibrary loan

    1. The report function should basically be developed toward customization.
    2. Interlibrary loan (not yet tested); in the future we hope to test running a core library plus branches (a resource-sharing mechanism).

    Koha OPAC permission problem

    The Koha OPAC permissions have a problem: knowing any patron's
    account name is enough, with any password at all, to see that patron's
    current loans, renew books, and manage the virtual bookshelf features;
    other areas are unaffected.

    This part of the permission handling needs to be fixed.

    Virtual bookshelves

    With the virtual bookshelf feature, only the first shelf can be created;
    after that, creation fails. To delete a shelf you must first delete its
    items. As shipped, Koha's virtual bookshelves effectively allow only one
    book per shelf.

    Permission settings

    1. Patron permissions need no configuration.
    2. Circulation staff permissions:
    a. circulate: circulate items
    b. catalogue: browse the catalog (staff interface)
    c. borrowers: add and modify patrons
    d. reserveforothers: place holds for patrons
    e. borrow: borrow items
    f. reserveforself: place holds for oneself

    This part still needs changes; otherwise the borrowers flag allows modifying patron records and passwords.

    Thursday, June 22, 2006

    Modifying itemtype

    biblioitems
    marc_subfield_table
    itemtype
    marc_word

    弓鞋國小
    713
    717
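    Renaming an itemtype code means touching every table listed above. A hypothetical SQL sketch (the table names are taken from the list above; which MARC tag/subfield stores the itemtype depends on your framework, so the flattened MARC tables are left as a commented warning):

```sql
-- Hypothetical example: renaming an itemtype code 'OLD' to 'NEW'.
UPDATE itemtype    SET itemtype = 'NEW' WHERE itemtype = 'OLD';
UPDATE biblioitems SET itemtype = 'NEW' WHERE itemtype = 'OLD';
-- marc_subfield_table and marc_word hold the flattened MARC data and, as the
-- error report warns, must be handled with great care: restrict any UPDATE
-- to the exact tag/subfield your framework maps to the itemtype.
```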

    OPAC images

    Edit /opt/koha/opac/htdocs/opac-tmpl/npl/zh_TW/images.

    For each item-category file, such as ㄧ般圖書 (general books),
    rename the image to ㄧ般圖書.gif.

    Tuesday, June 20, 2006

    Fixes

    Edit /intranet/cgi-bin/admin/authorised_values.pl

    line 181

    $row_data{delete}="$script_name?op=delete_confirm&searchfield=$searchfield&id=".$results->[$i]{'id'};

    change it to

    $row_data{delete}="$script_name?op=delete_confirmed&searchfield=$searchfield&id=".$results->[$i]{'id'};

    Edit acqui.simple/addbiblio.tmpl

    line 69

    假如 複本, ">編輯以存在館藏 的記錄。

    change it to

    假如 複本, ">編輯已存在館藏的記錄。
    (this corrects the typo 以存在 to 已存在, "already existing")


    http://lit184.lacc.fju.edu.tw:8080/cgi-bin/koha/admin/authorised_values.pl?op=delete_confirm&category=brs&id=41

    Tuesday, June 13, 2006

    Koha Traditional Chinese installation - 2006/06/13

    1. First install Ubuntu 5.10; remember to install the server edition (the desktop edition works too). Download from http://www.ubuntu.org.tw/

    References:
    http://wiki.ubuntu.org.tw/index.php/Ubuntu510_tw
    http://140.136.81.145:9999/services/demo.pdf
    http://140.136.81.145:9999/services/demo.odp
    http://140.136.81.145:9999/services/demo.sxi

    2. Install ssh, apache, mysql and php.

    Before using apt, set up /etc/apt/sources.list;
    see http://apt.ubuntu.org.tw/sources.list.breezy

    #apt-get install ssh
    #apt-get install mysql-server

    Set the mysql password:

    #mysqladmin -u root password 'your mysql root password'

    If you forget the mysql password, first stop the mysql service:
    #/etc/init.d/mysql stop
    then start it in skip-grant-tables mode:
    #mysqld_safe --skip-grant-tables &
    and reset the mysql root password:
    #mysqladmin -u root flush-privileges password "new password"
    #apt-get install apache2

    To manage mysql with phpMyAdmin:

    #apt-get install php4

    Install the mysql & apache module:

    #apt-get install libapache2-mod-auth-mysql

    Install the mysql module for php4:

    #apt-get install php4-mysql

    Now test php:

    vim test.php

    <?php phpinfo(); ?>

    If http://140.136.81.145:9999/test.php shows the PHP info page, PHP is installed.
    Next you can download phpMyAdmin

    or get it from http://www.phpmyadmin.net/home_page/index.php

    Set the root password in config.inc.php and it is ready to use.

    Of course, you can also use MySQL Administrator http://dev.mysql.com/downloads/administrator/

    3. Next, install the software Koha needs. Installing a few packages up
    front keeps the perl module installation from running into problems.

    #apt-get install libxml2
    #apt-get install libxml2-dev
    #apt-get install libssl-dev

    Then install the packages z3950 needs. If you want to use zebra (which Koha 2.4 will use), see
    http://lists.gnu.org/archive/html/koha-devel/2006-03/msg00012.html
    and Installing and Configuring Koha's Zebra Plugin With Koha 2.4 http://www.kohadocs.org/Installing_Zebra_plugin.html

    libyaz - Z39.50 runtime libraries
    libyaz-dev - Z39.50 development files and header files
    yaz - Utility programs for Z39.50 toolkit
    yaz-doc - Documentation for YAZ

    Install them with apt as well:

    #apt-get install libyaz
    #apt-get install libyaz-dev
    #apt-get install yaz
    #apt-get install yaz-doc
    #apt-get install libwrap-dev
    #apt-get install libdate-manip-perl
    #apt-get install libhtml-template-perl
    #apt-get install libmail-sendmail-perl
    #apt-get install make gcc lynx ncftp wget

    4. Next, install the perl modules:
    # perl -MCPAN -e 'install "Bundle::KohaSupport"'

    During setup, just choose Taiwan for the mirror (isu and tku are recommended);
    for everything else, press enter.

    At the end the system will report that DBD::mysql failed to install.
    That is fine; Koha runs without this module. The build fails because
    of mysql_config: since mysql was not installed from a tarball, its
    source was never compiled.

    #You also need to install
    perl -MCPAN -e 'install "Unicode::String"'
    perl -MCPAN -e 'install "Encode::HanExtra"'
    to handle Chinese.

    perl -MCPAN -e 'install "XML::SAX"'
    perl -MCPAN -e 'install "Class::Accessor"'
    perl -MCPAN -e 'install "Business::ISBN"'
    perl -MCPAN -e 'install "Net::LDAP"'
    perl -MCPAN -e 'install "PDF::API2"'

    Get the latest MARC modules from cvs.
    First install cvs:
    #apt-get install cvs

    #cvs -z3 -d:pserver:anonymous@marcpm.cvs.sourceforge.net:/cvsroot/marcpm co -P marc-record

    #cvs -z3 -d:pserver:anonymous@marcpm.cvs.sourceforge.net:/cvsroot/marcpm co -P marc-charset

    #cvs -z3 -d:pserver:anonymous@marcpm.cvs.sourceforge.net:/cvsroot/marcpm co -P marc-lint

    #cvs -z3 -d:pserver:anonymous@marcpm.cvs.sourceforge.net:/cvsroot/marcpm co -P marc-xml

    #cd marc-record
    #perl Makefile.PL
    #make
    #make install

    5. Next, install Koha 2.2.5.
    Download it from http://download.savannah.nongnu.org/releases/koha/

    or

    http://140.136.81.145:9999/koha-2.2.5.tar.gz

    The apache2 conf path is /etc/apache2/apache2.conf.

    Remember to choose MARC21 during installation.

    Because my tmpl files will be used later:

    set the opac path to /opt/koha/opac
    and the intranet path to /opt/koha/intranet

    #mkdir /var/log/koha
    #chmod -R 0777 /var/log/koha

    set the log path to /var/log/koha

    6. Localization: CGI

    First fix CGI.pm: download http://140.136.81.145:9999/CGI.pm
    and overwrite /usr/share/perl/5.8.7/CGI.pm, because the CGI.pm shipped with perl 5.8.7 has problems.

    7. Koha tmpl localization

    Download

    http://140.136.81.145:9999/kohalanguagefiles/koha-20060620.tar.gz

    and unpack it over the installed Koha.

    8. Database

    Drop all of Koha's tables, then load my sql file http://140.136.81.145:9999/kohasql/Koha.sql

    Commands:
    drop the Koha database:
    mysqladmin -u root -p drop Koha
    create the Koha database:
    mysqladmin -u root -p create Koha
    load the data:
    mysql -u root -p Koha < Koha.sql

    http://140.136.81.145:9999/koha/ also holds Kohademo.sql, whose data includes some bibliographic records and patrons, well suited for demos. Koha.sql carries only the z3950 settings and the Chinese MARC setup, no other data.

    9. Latest Koha source code from cvs

    If you want it, run cvs directly:
    cvs -z3 -d:pserver:anonymous@cvs.savannah.nongnu.org:/sources/koha co koha

    If you use the cvs checkout as your latest code, be sure to make backups first.

    10. Because my tmpl is used, remember to fix koha-httpd.conf, otherwise Koha's z3950 won't work.

    You can start z3950 at boot:

    vim z3950

    #!/bin/sh

    /opt/koha/intranet/scripts/z3950daemon/z3950-daemon-launch.sh

    #chmod 0755 z3950

    #update-rc.d z3950 defaults

    The system creates links for every runlevel:

    Adding system startup for /etc/init.d/z3950 ...
    /etc/rc0.d/K20z3950 -> ../init.d/z3950
    /etc/rc1.d/K20z3950 -> ../init.d/z3950
    /etc/rc6.d/K20z3950 -> ../init.d/z3950
    /etc/rc2.d/S20z3950 -> ../init.d/z3950
    /etc/rc3.d/S20z3950 -> ../init.d/z3950
    /etc/rc4.d/S20z3950 -> ../init.d/z3950
    /etc/rc5.d/S20z3950 -> ../init.d/z3950

    Finally, use rcconf to check whether the service is enabled:
    #rcconf

    Then edit /etc/apache2/apache2.conf to add:
    Include /etc/koha-httpd.conf

    #/etc/init.d/apache2 restart

    Open the OPAC and intranet URLs to test.

    If you like, there is a ready-made installation script
    that handles everything from installing apache to localizing Koha in one pass,
    but it currently only works on Ubuntu breezy and dapper.

    Forgot the Koha system account? Check /etc/koha.conf.


    -------------------------------------------------
    koha-2.3.0
    perl -MCPAN -e 'install "LWP::Simple"'
    perl -MCPAN -e 'install "XML::Simple"'