SentenceTransformer based on BAAI/bge-small-en-v1.5

This is a sentence-transformers model finetuned from BAAI/bge-small-en-v1.5. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-small-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
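
The same three-stage stack (Transformer encoder, CLS-token pooling, L2 normalization) can be assembled by hand from sentence_transformers.models modules. This is an illustrative sketch only; it starts from the base model's weights, so loading the released checkpoint as shown under Usage below is the normal route:

from sentence_transformers import SentenceTransformer, models

# Rebuild the three-module architecture listed above (illustrative sketch;
# these are the base model's weights, not this finetune's)
transformer = models.Transformer("BAAI/bge-small-en-v1.5", max_seq_length=512)
pooling = models.Pooling(
    transformer.get_word_embedding_dimension(),  # 384 for bge-small
    pooling_mode_cls_token=True,    # CLS-token pooling, per the dump above
    pooling_mode_mean_tokens=False,
)
model = SentenceTransformer(modules=[transformer, pooling, models.Normalize()])

Because the final Normalize() module produces unit-length vectors, dot product and cosine similarity coincide for this model's embeddings.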

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sucharush/bge_MNR")
# Run inference
sentences = [
    'Represent this question for retrieving relevant documents: Does low 25-Hydroxyvitamin D Level be Associated with Peripheral Arterial Disease in Type 2 Diabetes Patients?',
    'Patients with type 2 diabetes have an increased risk of atherosclerosis and vascular disease. Vitamin D deficiency is associated with vascular disease and is prevalent in diabetes patients. We undertook this study to determine the association between 25-hydroxyvitamin D (25[OH]D) levels and prevalence of peripheral arterial disease (PAD) in type 2 diabetes patients. A total of 1028 type 2 diabetes patients were recruited at Nanjing Medical University Affiliated Nanjing Hospital from November 2011 to October 2013. PAD was defined as an ankle-brachial index (ABI)\xa0<\xa00.9. Cardiovascular risk factors (blood pressure, HbA1c, lipid profile), comorbidities, carotid intima-media thickness (IMT) and 25(OH)D were assessed. Overall prevalence of PAD and of decreased 25(OH)D (<30\xa0ng/mL) were 20.1% (207/1028) and 54.6% (561/1028), respectively. PAD prevalence was higher in participants with decreased (23.9%) than in those with normal (15.6%) 25(OH)D (≥30\xa0ng/mL, p\xa0<0.01). Decreased 25(OH)D was associated with increased risk of PAD (odds ratio [OR], 1.69, 95% CI: 1.17-2.44, p\xa0<0.001) and PAD was significantly more likely to occur in participants ≥65\xa0years of age (OR, 2.56, 95% CI: 1.51 -4.48, vs. 1.21, 95% CI: 0.80-1.83, p-interaction\xa0=\xa00.027). After adjusting for known cardiovascular risk factors and potential confounding variables, the association of decreased 25(OH)D and PAD remained significant in patients <65\xa0years of age (OR, 1.55; 95% CI: 1.14-2.12, p\xa0=\xa00.006).',
    'Based on the information provided, we only know the number of patients who died within the first year after the surgery. To determine the probability of a patient surviving at least two years, we would need additional information about the number of patients who died in the second year or survived beyond that.\n\nWithout this information, it is not possible to calculate the probability of a patient surviving at least two years after the surgery.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
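
For retrieval, the queries in this card's examples carry the bge-style prefix "Represent this question for retrieving relevant documents: ", while passages are encoded as-is. A minimal semantic-search sketch, assuming an invented two-document corpus:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sucharush/bge_MNR")

# Hypothetical corpus; queries reuse the retrieval prefix seen in this card
corpus = [
    "Vitamin D deficiency is associated with vascular disease.",
    "Oxylipins are bioactive molecules produced from polyunsaturated fatty acids.",
]
query = (
    "Represent this question for retrieving relevant documents: "
    "Is vitamin D linked to vascular disease?"
)

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank the corpus by cosine similarity (embeddings are L2-normalized)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")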

Evaluation

Metrics

Logging

  • Dataset: ir-eval
  • Evaluated with main.LoggingEvaluator
Metric               Value
cosine_accuracy@1    0.9241
cosine_accuracy@3    0.9788
cosine_accuracy@5    0.9906
cosine_accuracy@10   0.9965
cosine_precision@1   0.9241
cosine_precision@3   0.3263
cosine_precision@5   0.1981
cosine_recall@1      0.9241
cosine_recall@3      0.9788
cosine_recall@5      0.9906
cosine_ndcg@10       0.9635
cosine_mrr@10        0.9525
cosine_map@100       0.9526
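
main.LoggingEvaluator is a custom class from the training script and is not part of the library; its metrics mirror what the built-in InformationRetrievalEvaluator reports. A minimal sketch of computing the same family of metrics, assuming toy stand-in data (the actual ir-eval queries, corpus, and relevance judgments are not included in this card):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("sucharush/bge_MNR")

# Invented toy data for illustration only
queries = {"q1": "Represent this question for retrieving relevant documents: Is vitamin D linked to vascular disease?"}
corpus = {
    "d1": "Vitamin D deficiency is associated with vascular disease.",
    "d2": "Flaxseed consumption alters plasma oxylipin concentrations.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="ir-eval")
results = evaluator(model)  # dict with accuracy@k, precision@k, recall@k, NDCG@10, MRR@10, MAP@100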

Training Details

Training Dataset

Unnamed Dataset

  • Size: 98,112 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:
    sentence_0: string; min: 18 tokens, mean: 55.27 tokens, max: 512 tokens
    sentence_1: string; min: 9 tokens, mean: 317.52 tokens, max: 512 tokens
  • Samples:
    Sample 1
      sentence_0: Represent this question for retrieving relevant documents: Are elevated levels of pro-inflammatory oxylipins in older subjects normalized by flaxseed consumption?
      sentence_1: Oxylipins, including eicosanoids, are highly bioactive molecules endogenously produced from polyunsaturated fatty acids. Oxylipins play a key role in chronic disease progression. It is possible, but unknown, if oxylipin concentrations change with the consumption of functional foods or differ with subject age. Therefore, in a parallel comparator trial, 20 healthy individuals were recruited into a younger (19-28years) or older (45-64years) age group (n=10/group). Participants ingested one muffin/day containing 30g of milled flaxseed (6g alpha-linolenic acid) for 4weeks. Plasma oxylipins were isolated through solid phase extraction, analyzed with HPLC-MS/MS targeted lipidomics, and quantified with the stable isotope dilution method. At baseline, the older group exhibited 13 oxylipins ≥2-fold the concentration of the younger group. Specifically, pro-inflammatory oxylipins 5-hydroxyeicosatetraenoic acid, 9,10,13-trihydroxyoctadecenoic acid, and 9,12,13-trihydroxyoctadecenoic acid were signi...
    Sample 2
      sentence_0: Represent this question for retrieving relevant documents: Find the isometries of the metric $ds^2 = dx^2 + dy^2$ over the rectangle $R=[0,a] \times [0,b]$, subject to the additional condition that any isometry $f$ maps $(0,0)$ to $(x_0, y_0)$. Find $x_0$ and $y_0$ such that the isometry $f$ is given by $f(x,y) = (x_0 + x, y_0 - y)$.
      sentence_1: An isometry is a transformation that preserves the distance between points. In this case, we are looking for transformations that preserve the metric $ds^2 = dx^2 + dy^2$. Let's consider the transformation $f(x,y) = (x_0 + x, y_0 - y)$ and find the conditions on $x_0$ and $y_0$ for it to be an isometry.

      First, let's compute the differential of the transformation:

      $$df = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} dx \\ dy \end{pmatrix} = \begin{pmatrix} dx \\ -dy \end{pmatrix}$$

      Now, let's compute the metric under this transformation:

      $$ds'^2 = (dx')^2 + (dy')^2 = dx^2 + (-dy)^2 = dx^2 + dy^2$$

      Since $ds'^2 = ds^2$, the transformation $f(x,y) = (x_0 + x, y_0 - y)$ is an isometry.

      Now, let's find the conditions on $x_0$ and $y_0$ such that the isometry maps $(0,0)$ to $(x_0, y_0)$. Applying the transformation to $(0,0)$, we get:

      $$f(0,0) = (x_0 + 0, y_0 - 0) = (x_0, y_0)$$

      Since the transformation maps $(0,0)$ to $(x_0, y_0)$, there are no additional conditions...
    Sample 3
      sentence_0: Represent this question for retrieving relevant documents: Do two di-leucine motifs regulate trafficking and function of mouse ASIC2a?
      sentence_1: Acid-sensing ion channels (ASICs) are proton-gated cation channels that mediate acid-induced responses in neurons. ASICs are important for mechanosensation, learning and memory, fear, pain, and neuronal injury. ASIC2a is widely expressed in the nervous system and modulates ASIC channel trafficking and activity in both central and peripheral systems. Here, to better understand mechanisms regulating ASIC2a, we searched for potential protein motifs that regulate ASIC2a trafficking.
  • Loss: MultipleNegativesRankingLoss (sketched after this list) with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
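
For reference, a minimal PyTorch sketch of what this objective computes: for a batch of (anchor, positive) pairs, every other in-batch positive acts as a negative, and the scaled cosine-similarity matrix is scored with cross-entropy against the diagonal. The function name mnr_loss is illustrative, not part of the library:

import torch
import torch.nn.functional as F

# Sketch of the MultipleNegativesRankingLoss objective with scale=20.0 and
# cosine similarity, matching the parameters above
def mnr_loss(anchors: torch.Tensor, positives: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)
    scores = anchors @ positives.T * scale                    # (batch, batch) cosine similarities
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)                    # correct pair sits on the diagonal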
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • num_train_epochs: 1
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: round_robin
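
These settings map directly onto the SentenceTransformerTrainer API. A minimal sketch, assuming a tiny stand-in pair dataset (the real 98,112-pair dataset is not published with this card) and an arbitrary output_dir; the original run additionally used eval_strategy "steps" with its custom LoggingEvaluator, omitted here to keep the sketch self-contained:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# Hypothetical stand-in for the unnamed (sentence_0, sentence_1) dataset
train_dataset = Dataset.from_dict({
    "sentence_0": ["Represent this question for retrieving relevant documents: example query"],
    "sentence_1": ["An example passage that answers the query."],
})

model = SentenceTransformer("BAAI/bge-small-en-v1.5")
loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="bge_MNR",                        # arbitrary choice for this sketch
    num_train_epochs=1,
    per_device_train_batch_size=32,
    batch_sampler=BatchSamplers.NO_DUPLICATES,   # avoids duplicate in-batch negatives
)
trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()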

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch    Step   Training Loss   ir-eval_cosine_ndcg@10
0.1631   500    0.021           0.9523
0.3262   1000   0.0069          0.9600
0.4892   1500   0.0051          0.9593
0.6523   2000   0.0055          0.9605
0.8154   2500   0.0053          0.9638
0.9785   3000   0.0056          0.9634
1.0      3066   -               0.9635

Framework Versions

  • Python: 3.12.8
  • Sentence Transformers: 3.4.1
  • Transformers: 4.51.3
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}