Refactor `modules/indexer` to make it more maintainable and easier to extend with new features. I'm trying to solve some issues with issue searching; this is a precursor to making functional changes.
Currently supported engines and their index versions:
| engines | issues | code |
| - | - | - |
| db | Just a wrapper for database queries, doesn't need a version | - |
| bleve | The version of the index is **2** | The version of the index is **6** |
| elasticsearch | The old index has no version, will be treated as version **0** in this PR | The version of the index is **1** |
| meilisearch | The old index has no version, will be treated as version **0** in this PR | - |
## Changes
### Split
Split it into multiple packages:
```text
indexer
├── internal
│   ├── bleve
│   ├── db
│   ├── elasticsearch
│   └── meilisearch
├── code
│   ├── bleve
│   ├── elasticsearch
│   └── internal
└── issues
    ├── bleve
    ├── db
    ├── elasticsearch
    ├── internal
    └── meilisearch
```
- `indexer/internal`: Internal shared package for the indexer.
- `indexer/internal/[engine]`: Internal shared package for each engine (bleve/db/elasticsearch/meilisearch).
- `indexer/code`: Implementations for the code indexer.
- `indexer/code/internal`: Internal shared package for the code indexer.
- `indexer/code/[engine]`: Implementation via each engine for the code indexer.
- `indexer/issues`: Implementations for the issues indexer.
- `indexer/issues/internal`: Internal shared package for the issues indexer.
- `indexer/issues/[engine]`: Implementation via each engine for the issues indexer.
### Deduplication
- Combine `Init/Ping/Close` for the code indexer and the issues indexer (a minimal interface sketch follows this list).
- ~Combine `issues.indexerHolder` and `code.wrappedIndexer` into `internal.IndexHolder`.~ Removed it; a dummy indexer is used instead while the real indexer is not ready.
- Deduplicate the two copies of ES client creation.
- Deduplicate the two copies of `indexerID()`.
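To make the combined lifecycle concrete, here is a minimal sketch of what a shared contract in `indexer/internal` could look like, together with a dummy implementation that stands in while the real indexer is not ready. The names and the exact method set (`Indexer`, `NewDummyIndexer`, the `Init` signature) are illustrative assumptions, not necessarily the code in this PR.

```go
package internal

import (
	"context"
	"fmt"
)

// Indexer captures the lifecycle shared by the code and issues indexers,
// regardless of the backing engine.
type Indexer interface {
	// Init opens or creates the index; the bool reports whether an
	// existing, up-to-date index was found.
	Init(ctx context.Context) (bool, error)
	// Ping checks whether the engine is reachable right now.
	Ping(ctx context.Context) error
	// Close releases the underlying resources.
	Close()
}

// dummyIndexer stands in while the real indexer is still initializing
// (or failed to initialize), so callers never see a nil indexer.
type dummyIndexer struct{}

// NewDummyIndexer returns an Indexer whose operations always fail fast.
func NewDummyIndexer() Indexer { return &dummyIndexer{} }

func (d *dummyIndexer) Init(ctx context.Context) (bool, error) {
	return false, fmt.Errorf("indexer is not ready")
}

func (d *dummyIndexer) Ping(ctx context.Context) error {
	return fmt.Errorf("indexer is not ready")
}

func (d *dummyIndexer) Close() {}
```

With such a dummy in place, callers never have to nil-check the indexer; calls simply fail with a clear error until the real indexer has been swapped in.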
### Enhancement
- [x] Support an index version for the elasticsearch issues indexer; an old index without a version will be treated as version **0**.
- [x] Fix the spelling of `elastic_search/ElasticSearch`; it should be `Elasticsearch`.
- [x] Improve the versioning of the ES index (see the naming sketch after this list). We don't need `Aliases`:
  - Gitea doesn't need aliases for "Zero Downtime" because it never deletes old indexes.
  - The old code of the issues indexer uses the original name to create the issue index, so it's tricky to convert it to an alias.
- [x] Support an index version for the meilisearch issues indexer; an old index without a version will be treated as version **0**.
- [x] Do "ping" only when `Ping` has been called; don't ping periodically and cache the status.
- [x] Support the context parameter whenever possible.
- [x] Fix the outdated example config.
- [x] Give up the requeue logic of the issues indexer (when indexing fails, call Ping to check whether the failure was caused by the engine being unavailable, and requeue the task only if the engine is unavailable):
  - It is fragile and tricky and could cause data loss (it did happen when I was doing some tests for this PR), and it works for ES only.
  - Just always requeue the failed task; if the failure is caused by bad data, that is a bug in Gitea which should be fixed.
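As a rough illustration of how versioned index names can replace aliases, the sketch below derives the physical index name from a base name plus a version, treating the legacy unversioned name as version 0. The helper name and the `.vN` suffix are assumptions for illustration, not necessarily the exact scheme used in this PR.

```go
package internal

import "fmt"

// VersionedIndexName derives the physical index name from a base name and a
// version. An index created before versioning existed keeps its bare name and
// is treated as version 0, so it is still found after upgrading.
func VersionedIndexName(indexName string, version int) string {
	if version == 0 {
		// Legacy index, created before index versioning was introduced.
		return indexName
	}
	return fmt.Sprintf("%s.v%d", indexName, version)
}
```

When the index version is bumped, the new name simply points at a fresh index and the old one is left behind, which is why aliases and zero-downtime swapping are not needed.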
---------
Co-authored-by: Giteabot <teabot@gitea.io>
58 lines · 1.4 KiB · Go
```go
// Copyright 2021 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package bleve

import (
	"github.com/blevesearch/bleve/v2"
)

// FlushingBatch is a batch of operations that automatically flushes to the
// underlying index once it reaches a certain size.
type FlushingBatch struct {
	maxBatchSize int
	batch        *bleve.Batch
	index        bleve.Index
}

// NewFlushingBatch creates a new flushing batch for the specified index. Once
// the number of operations in the batch reaches the specified limit, the batch
// automatically flushes its operations to the index.
func NewFlushingBatch(index bleve.Index, maxBatchSize int) *FlushingBatch {
	return &FlushingBatch{
		maxBatchSize: maxBatchSize,
		batch:        index.NewBatch(),
		index:        index,
	}
}

// Index adds an index operation to the batch
func (b *FlushingBatch) Index(id string, data interface{}) error {
	if err := b.batch.Index(id, data); err != nil {
		return err
	}
	return b.flushIfFull()
}

// Delete adds a delete operation to the batch
func (b *FlushingBatch) Delete(id string) error {
	b.batch.Delete(id)
	return b.flushIfFull()
}

func (b *FlushingBatch) flushIfFull() error {
	if b.batch.Size() < b.maxBatchSize {
		return nil
	}
	return b.Flush()
}

// Flush submits the batch and creates a new one
func (b *FlushingBatch) Flush() error {
	err := b.index.Batch(b.batch)
	if err != nil {
		return err
	}
	b.batch = b.index.NewBatch()
	return nil
}
```
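For context, here is a minimal usage sketch of `FlushingBatch` against an in-memory bleve index. The import path of the internal package is hypothetical, and the batch size of 16 is an arbitrary example value.

```go
package main

import (
	"fmt"
	"log"

	"github.com/blevesearch/bleve/v2"

	// Hypothetical import path for the internal bleve package shown above.
	inner_bleve "example.com/forgejo/modules/indexer/internal/bleve"
)

func main() {
	// A throwaway in-memory index is enough for this sketch.
	index, err := bleve.NewMemOnly(bleve.NewIndexMapping())
	if err != nil {
		log.Fatal(err)
	}
	defer index.Close()

	// Flush automatically once 16 operations have accumulated in the batch.
	batch := inner_bleve.NewFlushingBatch(index, 16)
	for i := 0; i < 100; i++ {
		if err := batch.Index(fmt.Sprintf("doc-%d", i), map[string]any{"title": "hello"}); err != nil {
			log.Fatal(err)
		}
	}
	// Flush the remainder that never reached the threshold.
	if err := batch.Flush(); err != nil {
		log.Fatal(err)
	}
}
```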