
# API Endpoints

Complete reference for all Manuscript API endpoints.

All endpoints are served from the base URL:

```
http://localhost:8080
```

Manuscript does not require authentication by default. Add authentication at a reverse proxy if needed.


## POST /verify

Analyze content for AI generation.

Text:

```shell
curl -X POST http://localhost:8080/verify \
  -H "Content-Type: application/json" \
  -d '{"text": "Your content to analyze..."}'
```

Image:

```shell
curl -X POST http://localhost:8080/verify \
  -F "image=@photo.jpg"
```

Audio:

```shell
curl -X POST http://localhost:8080/verify \
  -F "audio=@recording.mp3"
```

Video:

```shell
curl -X POST http://localhost:8080/verify \
  -F "video=@clip.mp4"
```
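The same multipart uploads can be issued from Python. A minimal sketch using the `requests` library; the form field names (`image`, `audio`, `video`) are taken from the curl examples above:

```python
import requests

def verify_file(path, field):
    """Upload a media file to /verify for analysis.

    `field` must be one of "image", "audio", or "video",
    matching the multipart field names in the curl examples.
    """
    with open(path, "rb") as f:
        response = requests.post(
            "http://localhost:8080/verify",
            files={field: f},  # requests sets multipart/form-data automatically
        )
    return response.json()

# Example usage (requires a running Manuscript instance):
# result = verify_file("photo.jpg", "image")
```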
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `detailed` | boolean | `false` | Include full signal breakdown |
Response:

```json
{
  "id": "hm_abc123def456",
  "verdict": "human",
  "confidence": 0.87,
  "content_type": "text",
  "signals": {
    "sentence_variance": 0.42,
    "vocabulary_richness": 0.78,
    "contraction_ratio": 0.15
  },
  "processing_time_ms": 8
}
```
| Field | Type | Description |
| --- | --- | --- |
| `id` | string | Unique analysis identifier |
| `verdict` | string | `"human"` or `"ai"` |
| `confidence` | float | 0.0 to 1.0 confidence score |
| `content_type` | string | `"text"`, `"image"`, `"audio"`, or `"video"` |
| `signals` | object | Detection signal breakdown |
| `processing_time_ms` | integer | Processing time in milliseconds |
Error response:

```json
{
  "error": "content_too_large",
  "message": "Text exceeds maximum length of 100000 characters",
  "status": 400
}
```
| Status | Error | Description |
| --- | --- | --- |
| 400 | `invalid_content` | Missing or invalid content |
| 400 | `content_too_large` | Exceeds size limits |
| 400 | `unsupported_format` | Unknown file format |
| 500 | `processing_error` | Internal processing failed |
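Because every error shares the JSON shape above, a client can surface the `message` field directly. A sketch (the `ManuscriptError` class is an illustrative helper, not part of the API):

```python
import requests

class ManuscriptError(Exception):
    """Hypothetical client-side exception carrying the API's
    `error` and `message` fields from an error response."""

def verify(text):
    response = requests.post(
        "http://localhost:8080/verify",
        json={"text": text},
    )
    if response.status_code >= 400:
        # Error bodies look like:
        # {"error": "content_too_large", "message": "...", "status": 400}
        body = response.json()
        raise ManuscriptError(f"{body['error']}: {body['message']}")
    return response.json()
```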

## GET /verify/{id}

Retrieve a previous analysis by ID.

```shell
curl http://localhost:8080/verify/hm_abc123def456
```

The response uses the same format as the `POST /verify` response.

| Status | Error | Description |
| --- | --- | --- |
| 404 | `not_found` | Analysis ID not found |
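Fetching a stored analysis from Python, handling the documented 404 case. A sketch assuming the `requests` library:

```python
import requests

def get_analysis(analysis_id):
    """Fetch a prior analysis by ID; returns None if it was not found."""
    response = requests.get(f"http://localhost:8080/verify/{analysis_id}")
    if response.status_code == 404:
        # Body: {"error": "not_found", "message": "...", "status": 404}
        return None
    response.raise_for_status()
    return response.json()

# Example usage (requires a running Manuscript instance):
# analysis = get_analysis("hm_abc123def456")
```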

## POST /batch

Analyze multiple items in a single request.

```shell
curl -X POST http://localhost:8080/batch \
  -H "Content-Type: application/json" \
  -d '{
    "items": [
      {"text": "First text to analyze..."},
      {"text": "Second text to analyze..."},
      {"text": "Third text to analyze..."}
    ]
  }'
```

Response:

```json
{
  "results": [
    {
      "id": "hm_batch_001",
      "verdict": "human",
      "confidence": 0.89,
      "content_type": "text"
    },
    {
      "id": "hm_batch_002",
      "verdict": "ai",
      "confidence": 0.92,
      "content_type": "text"
    },
    {
      "id": "hm_batch_003",
      "verdict": "human",
      "confidence": 0.76,
      "content_type": "text"
    }
  ],
  "summary": {
    "total": 3,
    "human": 2,
    "ai": 1,
    "processing_time_ms": 45
  }
}
```

Limits:

- Maximum 100 items per batch
- Maximum 10 MB total request size
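Workloads larger than the 100-item limit can be split client-side. A sketch in Python; the request shape follows the `/batch` example above, and the helper names are illustrative:

```python
import requests

MAX_BATCH_ITEMS = 100  # documented per-request item limit

def chunk(items, size=MAX_BATCH_ITEMS):
    """Split a list into consecutive slices of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def verify_many(texts):
    """Send texts through /batch in compliant chunks, merging results."""
    results = []
    for batch in chunk([{"text": t} for t in texts]):
        response = requests.post(
            "http://localhost:8080/batch",
            json={"items": batch},
        )
        results.extend(response.json()["results"])
    return results
```

Note this does not guard the 10 MB total-size limit; very large texts may need smaller chunks.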

## GET /health

Health check endpoint.

```shell
curl http://localhost:8080/health
```

```json
{
  "status": "healthy",
  "version": "0.2.0",
  "uptime_seconds": 3600
}
```

| Field | Type | Description |
| --- | --- | --- |
| `status` | string | `"healthy"` or `"unhealthy"` |
| `version` | string | Manuscript version |
| `uptime_seconds` | integer | Time since start |

## GET /metrics

Prometheus metrics endpoint.

```shell
curl http://localhost:8080/metrics
```

```
# HELP manuscript_requests_total Total requests by type
# TYPE manuscript_requests_total counter
manuscript_requests_total{type="text"} 1523
manuscript_requests_total{type="image"} 456
manuscript_requests_total{type="audio"} 89
manuscript_requests_total{type="video"} 12

# HELP manuscript_request_duration_seconds Request latency
# TYPE manuscript_request_duration_seconds histogram
manuscript_request_duration_seconds_bucket{le="0.01"} 1890
manuscript_request_duration_seconds_bucket{le="0.05"} 2034
manuscript_request_duration_seconds_bucket{le="0.1"} 2067
manuscript_request_duration_seconds_bucket{le="+Inf"} 2080

# HELP manuscript_active_requests Currently processing requests
# TYPE manuscript_active_requests gauge
manuscript_active_requests 3
```

| Metric | Type | Description |
| --- | --- | --- |
| `manuscript_requests_total` | Counter | Total requests by content type |
| `manuscript_request_duration_seconds` | Histogram | Request latency distribution |
| `manuscript_active_requests` | Gauge | Current in-flight requests |
| `manuscript_detection_confidence` | Histogram | Confidence score distribution |
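Since the endpoint serves metrics at the default `/metrics` path, a Prometheus scrape job only needs the target address. A minimal sketch; the job name and scrape interval are illustrative, not prescribed by Manuscript:

```yaml
scrape_configs:
  - job_name: "manuscript"
    scrape_interval: 15s
    static_configs:
      - targets: ["localhost:8080"]
```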

## Content Types

| Content Type | Usage |
| --- | --- |
| `application/json` | JSON body with `text` or `image_base64` |
| `multipart/form-data` | File uploads |

All responses are `application/json`.


## Rate Limiting

Manuscript does not implement rate limiting by default. Configure rate limiting at the reverse proxy level.

Example nginx configuration:

```nginx
limit_req_zone $binary_remote_addr zone=manuscript:10m rate=100r/s;

location /verify {
    limit_req zone=manuscript burst=200 nodelay;
    proxy_pass http://manuscript:8080;
}
```
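When a reverse proxy enforces limits, clients may receive HTTP 429 once the burst allowance is exhausted. A retry sketch in Python; note the 429 comes from the proxy, not from Manuscript itself, and the retry counts and backoff values are illustrative:

```python
import time
import requests

def verify_with_retry(text, retries=3, backoff=0.5):
    """POST to /verify, backing off exponentially on proxy-issued 429s."""
    for attempt in range(retries + 1):
        response = requests.post(
            "http://localhost:8080/verify",
            json={"text": text},
        )
        if response.status_code != 429:
            response.raise_for_status()
            return response.json()
        time.sleep(backoff * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    raise RuntimeError("rate limited after retries")
```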

## Client Examples

Python:

```python
import requests

def verify_text(text):
    response = requests.post(
        "http://localhost:8080/verify",
        json={"text": text},
    )
    return response.json()

result = verify_text("Your content here...")
print(f"Verdict: {result['verdict']} ({result['confidence']:.0%})")
```
JavaScript:

```javascript
async function verifyText(text) {
  const response = await fetch("http://localhost:8080/verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  return response.json();
}

const result = await verifyText("Your content here...");
console.log(`Verdict: ${result.verdict} (${Math.round(result.confidence * 100)}%)`);
```
Go:

```go
package main

import (
	"bytes"
	"encoding/json"
	"net/http"
)

type VerifyRequest struct {
	Text string `json:"text"`
}

type VerifyResponse struct {
	ID         string  `json:"id"`
	Verdict    string  `json:"verdict"`
	Confidence float64 `json:"confidence"`
}

func verifyText(text string) (*VerifyResponse, error) {
	body, err := json.Marshal(VerifyRequest{Text: text})
	if err != nil {
		return nil, err
	}
	resp, err := http.Post(
		"http://localhost:8080/verify",
		"application/json",
		bytes.NewReader(body),
	)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var result VerifyResponse
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return nil, err
	}
	return &result, nil
}
```