Nexevo Features
Closed-loop RLHF feedback
Collect thumbs-up/thumbs-down signals from user behavior and automatically feed them back to the routing system, so the self-learning algorithm can optimize future model selection.
```python
from nexevo_ai import Nexevo
client = Nexevo()
resp = client.chat.completions.create(
model="nexevo/balanced",
messages=[{"role": "user", "content": "How do I optimize SQL?"}],
)
gen_id = resp["nexevo"]["generation_id"]  # Important: persist this ID
# After the user clicks 👍/👎 on the UI:
client.feedback.submit(
generation_id=gen_id,
    rating=1,                      # 1 = 👍, -1 = 👎
    comment="very clear",          # optional free text
    tags=["accurate", "concise"],  # optional tags
)
# Review aggregate feedback later (positive-rating share over the last 7 days)
summary = client.feedback.summary(days=7)
print(f"Positive rating: {summary['positive_pct']}")

# Feedback flows straight into the bandit + Elo scores, so Nexevo's
# self-learning router will:
# - raise its confidence in upvoted models for this intent
# - lower the priority of downvoted models
# No manual tuning is needed: routing quality improves automatically
# as user feedback accumulates.
```
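To make the routing effect concrete, here is a minimal, hypothetical sketch of how ±1 ratings could drive model selection in a simple epsilon-greedy bandit. This is an illustration under assumptions, not Nexevo's actual internals: the class name `FeedbackBandit`, the `(intent, model)` score table, and the learning rate are all invented for this example.

```python
import random
from collections import defaultdict

class FeedbackBandit:
    """Epsilon-greedy bandit: one running score per (intent, model) pair.

    Hypothetical sketch of feedback-driven routing; not the real Nexevo code.
    """

    def __init__(self, models, epsilon=0.1, lr=0.1):
        self.models = models
        self.epsilon = epsilon            # exploration rate
        self.lr = lr                      # how far one 👍/👎 moves the score
        self.scores = defaultdict(float)  # (intent, model) -> running score

    def select(self, intent):
        # Explore occasionally; otherwise pick the best-scoring model
        # for this intent.
        if random.random() < self.epsilon:
            return random.choice(self.models)
        return max(self.models, key=lambda m: self.scores[(intent, m)])

    def feedback(self, intent, model, rating):
        # rating: +1 for 👍, -1 for 👎; nudge the score toward the rating.
        key = (intent, model)
        self.scores[key] += self.lr * (rating - self.scores[key])

bandit = FeedbackBandit(["model-a", "model-b"], epsilon=0.0)
bandit.feedback("sql-optimization", "model-a", +1)
bandit.feedback("sql-optimization", "model-b", -1)
print(bandit.select("sql-optimization"))  # with epsilon=0: "model-a"
```

With `epsilon=0` the selection above is deterministic: a single 👍 lifts `model-a` above the downvoted `model-b` for that intent. In practice a production router would also fold feedback into longer-lived Elo-style ratings, as the comments above describe.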