Using this segmentation library is simple. First, initialize the class:
ChineseSegmenter seg = ChineseSegmenter.getGBSegmenter();
Then call seg.segmentLine("要分词的中文段", " ") — the second argument is the separator placed between the segmented words.
For example, segmenting "儿童节日" outputs: 儿童 节日 儿童节
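Since segmentLine simply returns the input with segment boundaries marked by the separator, a plain whitespace split recovers the individual tokens — which is exactly what WhitespaceAnalyzer will do later. A minimal sketch; the segmented string is hard-coded here to stand in for the segmenter's output, because ChineseSegmenter itself is an external library:

```java
public class SegmentDemo {
    public static void main(String[] args) {
        // Stand-in for ChineseSegmenter.getGBSegmenter().segmentLine("儿童节日", " "),
        // which (per the example above) yields space-separated segments.
        String segmented = "儿童 节日 儿童节";

        // WhitespaceAnalyzer tokenizes on exactly these spaces at index time.
        String[] tokens = segmented.split(" ");
        for (String t : tokens) {
            System.out.println(t);
        }
        // prints 儿童, 节日, 儿童节 on separate lines
    }
}
```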
Below is a brief outline of how to hook this into the search code.
When building the Lucene index, use WhitespaceAnalyzer as the analyzer:
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import java.io.IOException;

IndexWriter writer = new IndexWriter(Directory, new WhitespaceAnalyzer(), true);

public void AddDocument(String Title, String Content, ..)
{
    Document doc = new Document();
    ChineseSegmenter cs = ChineseSegmenter.getGBSegmenter(); // initialize the segmenter
    doc.add(Field.Text("content", cs.segmentLine(Content, " "))); // store the segmented text in the index
    doc.add(Field.Text("title", cs.segmentLine(Title, " ")));
    try
    {
        writer.addDocument(doc);
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
}
When building the index above, the article's title and content are segmented and then stored in the index. At search time, use the same WhitespaceAnalyzer,
then merge the results from the returned hits.
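The index/search round trip can be sketched without Lucene: pre-segment each field, tokenize on whitespace (what WhitespaceAnalyzer does), build a per-field inverted index, and union the document hits from the title and content fields. All class, method, and variable names here are illustrative, not part of the segmenter or Lucene API:

```java
import java.util.*;

public class TinySearchDemo {
    // token -> sorted set of doc ids; one map per field, mirroring "title"/"content".
    static Map<String, Set<Integer>> index(List<String> segmentedDocs) {
        Map<String, Set<Integer>> inv = new HashMap<>();
        for (int id = 0; id < segmentedDocs.size(); id++) {
            // Whitespace split = what WhitespaceAnalyzer does at index time.
            for (String tok : segmentedDocs.get(id).split(" ")) {
                inv.computeIfAbsent(tok, k -> new TreeSet<>()).add(id);
            }
        }
        return inv;
    }

    public static void main(String[] args) {
        // Pre-segmented text standing in for ChineseSegmenter output.
        List<String> titles   = Arrays.asList("儿童 节日", "体育 新闻");
        List<String> contents = Arrays.asList("今天 是 儿童节", "足球 比赛 新闻");

        Map<String, Set<Integer>> titleIdx   = index(titles);
        Map<String, Set<Integer>> contentIdx = index(contents);

        // Search: union the hits from both fields, as described above.
        String query = "新闻";
        Set<Integer> hits = new TreeSet<>();
        hits.addAll(titleIdx.getOrDefault(query, Collections.emptySet()));
        hits.addAll(contentIdx.getOrDefault(query, Collections.emptySet()));
        System.out.println(hits); // prints [1]
    }
}
```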
public static void main(String[] args) throws Exception {
    ChineseSegmenter seg = ChineseSegmenter.getGBSegmenter();
    System.out.println(seg.segmentLine("儿童节日", " "));
}