In-Browser Python for Data Analysis: AI Coding Assistant for Scientific Research

Imagine you're in a meeting with collaborators, reviewing research data. Someone asks: "What if we subset the data by age group?" or "Can we see a correlation plot for these variables?"

With a traditional workflow, you'd say: "Let me go back to my laptop, open Jupyter, run the code, screenshot the result, and email it to you." Thirty minutes later, you're still setting up your environment.

What if you could run that analysis in 30 seconds instead? Right in your web browser. No Python installation. No Jupyter setup. No environment conflicts. Just instant data analysis wherever you have internet.

That's the power of in-browser Python for scientific research.

GetScholar integrates Pyodide (Python compiled to WebAssembly) with an AI coding assistant, enabling researchers to:

  • ✅ Run Python code directly in research documents
  • ✅ Analyze data without installing Python locally
  • ✅ Create visualizations alongside your writing
  • ✅ Get AI help writing analysis code
  • ✅ Share executable notebooks with collaborators
  • ✅ Reproduce analyses years later (no environment drift)

Why In-Browser Python Matters for Researchers

The Traditional Research Computing Workflow (Painful)

Most researchers face this fragmented workflow:

1. Collect data → Export to CSV
2. Open Jupyter Notebook or RStudio → Switch applications
3. Import CSV → Write code
4. Run analysis → Generate plots
5. Screenshot plots → Paste into Word document
6. Write interpretation → Switch back to document
7. Realize you need to adjust analysis → Go back to step 2
8. Repeat steps 2-7 multiple times

Problems:
❌ Data and interpretation are in different files
❌ Can't easily reproduce analysis months later
❌ Hard to share executable analysis with collaborators
❌ Environment setup breaks (Python versions, package conflicts)
❌ Context switching wastes time and breaks focus

The In-Browser Python Workflow (Seamless)

GetScholar enables a unified workflow:

1. Collect data → Paste into table or load from file
2. Write analysis code → In the same document as your writing
3. Run code → Click "Run" (3 seconds)
4. See results → Output appears immediately below code
5. Adjust and re-run → Instant feedback
6. Write interpretation → Right next to the code and results

Benefits:
✅ Code, data, results, and interpretation in one document
✅ Reproducible by default (code is embedded)
✅ Easy to share (send link, recipient can re-run)
✅ No environment setup (works on any device with browser)
✅ No context switching (everything in one place)

Real Example: Lab Meeting Scenario

Traditional Workflow:

Lab member: "Can we test if the effect differs by gender?"

You: "Good question. Let me check."
[Open laptop, find Jupyter file, load data, write code...]
[10 minutes later]
You: "OK, running now..."
[Code errors: package version conflict]
[5 more minutes fixing environment]
You: "Here are the results. Let me screenshot..."
[Email screenshot to team]

Total time: 20-25 minutes

GetScholar In-Browser Python:

Lab member: "Can we test if the effect differs by gender?"

You: [Opens GetScholar document on tablet]
[Adds code block below existing analysis]

import pandas as pd

# Subset data
male_data = df[df['gender'] == 'M']
female_data = df[df['gender'] == 'F']

# Compare means
print(f"Male mean: {male_data['outcome'].mean():.2f}")
print(f"Female mean: {female_data['outcome'].mean():.2f}")

# Statistical test
from scipy.stats import ttest_ind
t_stat, p_value = ttest_ind(male_data['outcome'],
                             female_data['outcome'])
print(f"t-test: t={t_stat:.2f}, p={p_value:.4f}")

[Click "Run"]
[Results appear in 3 seconds]

Output:
Male mean: 7.34
Female mean: 8.12
t-test: t=-2.45, p=0.0156

You: "Yes, significant difference. p=0.016."
[Everyone sees the code and results in real-time]

Total time: 2-3 minutes

Time saved: 90%

How GetScholar's In-Browser Python Works

Powered by Pyodide (WebAssembly Python)

Pyodide is CPython compiled to WebAssembly, allowing Python to run directly in your browser.

What this means:

  • No server required: Code runs on your device, not our servers
  • Privacy-preserving: Your data never leaves your browser
  • Fast execution: Compiled code runs at near-native speed
  • Full Python: Not a subset—real CPython 3.11+
  • Scientific libraries: NumPy, Pandas, Matplotlib, SciPy, Scikit-learn

Technical magic:

Traditional Python:
Your code → Python interpreter (on your computer) → Results

In-Browser Python (Pyodide):
Your code → Python interpreter (compiled to WebAssembly, runs in browser) → Results

Same Python. Different execution environment.
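
A quick way to see the difference from inside a session: under Pyodide the interpreter reports the WebAssembly/Emscripten runtime as its platform, while the language itself is ordinary CPython. A minimal check (exact values depend on the Pyodide build GetScholar ships):

```python
import sys
import platform

# The Python language is standard CPython; only the execution platform differs.
print(sys.version)         # e.g. "3.11.x ..." (depends on the Pyodide build)
print(sys.platform)        # "emscripten" when running in the browser
print(platform.machine())  # typically "wasm32" under WebAssembly
```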

Included Scientific Libraries

GetScholar's Pyodide environment includes:

| Library | Use Cases |
|---------|-----------|
| NumPy | Numerical computing, array operations, linear algebra |
| Pandas | Data manipulation, CSV/Excel import, DataFrames |
| Matplotlib | Static plots (bar, scatter, line, histogram, etc.) |
| Seaborn | Statistical visualizations, beautiful defaults |
| SciPy | Statistical tests, signal processing, optimization |
| Scikit-learn | Machine learning (regression, classification, clustering) |
| Statsmodels | Advanced statistical models, time series analysis |

Example: One code block, multiple libraries:

import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats

# Load data
data = pd.read_csv('survey_results.csv')

# Descriptive statistics
print(data.describe())

# Correlation
corr, p_val = stats.pearsonr(data['age'], data['score'])
print(f"Correlation: r={corr:.3f}, p={p_val:.4f}")

# Visualization
plt.scatter(data['age'], data['score'])
plt.xlabel('Age')
plt.ylabel('Score')
plt.title(f'Age vs Score (r={corr:.2f})')
plt.show()

All of this runs in your browser. No installation required.

AI Coding Assistant

Don't know how to code your analysis? GetScholar's AI helps.

Example Interaction:

You: "I have two groups (treatment and control) with before/after measurements.
     I want to test if the treatment had an effect."

AI Assistant: "You need a repeated measures analysis. Here's the code:"

import pandas as pd
from scipy import stats

# Assuming your data is in a DataFrame:
# Columns: group (treatment/control), before, after, subject_id

# Calculate change scores
data['change'] = data['after'] - data['before']

# Subset by group
treatment = data[data['group'] == 'treatment']['change']
control = data[data['group'] == 'control']['change']

# Independent t-test on change scores
t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"Treatment mean change: {treatment.mean():.2f}")
print(f"Control mean change: {control.mean():.2f}")
print(f"t-test: t={t_stat:.2f}, p={p_value:.4f}")

if p_value < 0.05:
    print("Significant difference detected.")
else:
    print("No significant difference.")

"Click Run to execute this analysis."

AI can help with:

  • Writing code from natural language descriptions
  • Debugging errors (explain what went wrong; see the hypothetical example after this list)
  • Suggesting appropriate statistical tests
  • Optimizing slow code
  • Explaining what code does (for learning)
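
As a small illustration of the debugging help, here is a hypothetical exchange (the data and column names are invented for the example): you hit a KeyError, and the assistant points out that column names are case-sensitive and suggests listing them first.

```python
import pandas as pd

# Hypothetical data, for illustration only
df = pd.DataFrame({'Age': [25, 30, 28], 'Score': [85, 90, 88]})

# df['age'].mean() would raise KeyError: 'age' -- column names are case-sensitive.
# A quick check the assistant might suggest:
print(df.columns.tolist())   # ['Age', 'Score']
print(df['Age'].mean())      # approximately 27.67
```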

Live Code Execution in Documents

Code blocks are embedded directly in your research documents.

Example: Results Section of a Paper

# Results

## Participant Demographics

A total of 120 participants were recruited (60 treatment, 60 control).
Mean age was 42.3 years (SD=8.7).

```python
import pandas as pd

# Load data
df = pd.read_csv('participants.csv')

# Summary statistics
print("Sample size:", len(df))
print("\nAge statistics:")
print(df['age'].describe())

print("\nGender distribution:")
print(df['gender'].value_counts())
```

Output:
Sample size: 120

Age statistics:
count    120.000000
mean      42.343333
std        8.712456
min       24.000000
max       68.000000

Gender distribution:
F    67
M    53


## Primary Outcome Analysis

Treatment group showed significant improvement compared to control
(t=3.45, p=0.001).

```python
from scipy import stats

treatment = df[df['group']=='treatment']['outcome']
control = df[df['group']=='control']['outcome']

# t-test
t_stat, p_val = stats.ttest_ind(treatment, control)

print(f"Treatment: M={treatment.mean():.2f}, SD={treatment.std():.2f}")
print(f"Control: M={control.mean():.2f}, SD={control.std():.2f}")
print(f"t({len(treatment)+len(control)-2})={t_stat:.2f}, p={p_val:.4f}")

Output: Treatment: M=8.45, SD=1.23 Control: M=7.12, SD=1.45 t(118)=3.45, p=0.0010


Code, results, and interpretation **all in one document**.

Real-World Use Cases

Use Case 1: Data Exploration During Literature Review

Scenario: Reading a paper that reports results. Want to verify their statistics.

Traditional Workflow:

  • Screenshot the data table
  • Manually type numbers into Excel or Jupyter
  • Run analysis
  • Compare to paper's reported values
  • 15-20 minutes per paper

GetScholar Workflow:

Reading: Smith et al. (2023) - "Effects of Intervention X"

They report: Two groups (n=30 each), Group A: M=15.4, SD=2.3;
Group B: M=17.8, SD=2.1; p<0.01

Let me verify their t-test:

```python
import scipy.stats as stats

# Reconstruct approximate data from summary statistics
# (or request raw data from authors)

# For verification, use summary stats directly
from scipy.stats import ttest_ind_from_stats

t_stat, p_val = ttest_ind_from_stats(
    mean1=15.4, std1=2.3, nobs1=30,
    mean2=17.8, std2=2.1, nobs2=30
)

print(f"t-statistic: {t_stat:.3f}")
print(f"p-value: {p_val:.4f}")
print(f"Authors reported p<0.01: {p_val < 0.01}")

Output: t-statistic: -4.188 p-value: 0.0001 Authors reported p < 0.01: True


Analysis confirmed. Paper's statistics are correct.

**Time saved**: 15-20 minutes → 1-2 minutes per paper

Use Case 2: Quick Data Checks in Meetings

Scenario: Grant review meeting. Reviewer asks: "What was your power analysis for that sample size?"

Without In-Browser Python:

  • "I don't have those numbers with me."
  • "Let me get back to you after the meeting."
  • Opportunity to address concern is lost.

With GetScholar In-Browser Python:

from statsmodels.stats.power import ttest_power

# Calculate power for our design
power = ttest_power(effect_size=0.5,  # Medium effect
                    nobs=60,            # Our sample size per group
                    alpha=0.05)         # Significance level

print(f"Statistical power: {power:.2%}")

if power >= 0.80:
    print("✓ Adequate power (≥80%)")
else:
    print("⚠ Underpowered (<80%)")

Output:
Statistical power: 82.14%
✓ Adequate power (≥80%)


"Yes, we have 82% power to detect a medium effect."
Reviewer satisfied. Grant moves forward.

Use Case 3: Reproducible Meta-Analysis

Scenario: Conducting meta-analysis of 15 studies.

Traditional Workflow:

  • Spreadsheet with extracted data
  • Separate R or Python script for analysis
  • Manual process to update if data changes
  • Hard for others to reproduce

GetScholar Workflow:

# Meta-Analysis: Intervention X for Outcome Y

## Included Studies

| Study | Year | N | Effect Size | SE |
|-------|------|---|-------------|-----|
| Smith | 2020 | 120 | 0.45 | 0.12 |
| Jones | 2021 | 95 | 0.38 | 0.15 |
| Lee | 2022 | 150 | 0.52 | 0.10 |
| ...   | ...  | ... | ...  | ... |

## Random-Effects Meta-Analysis

```python
import pandas as pd
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Data from table above
studies = pd.DataFrame({
    'author': ['Smith', 'Jones', 'Lee', ...],
    'year': [2020, 2021, 2022, ...],
    'n': [120, 95, 150, ...],
    'effect': [0.45, 0.38, 0.52, ...],
    'se': [0.12, 0.15, 0.10, ...]
})

# Calculate weights (inverse variance)
studies['weight'] = 1 / studies['se']**2

# Pooled effect size
pooled_effect = np.average(studies['effect'],
                           weights=studies['weight'])
pooled_se = np.sqrt(1 / studies['weight'].sum())

# 95% CI
ci_lower = pooled_effect - 1.96 * pooled_se
ci_upper = pooled_effect + 1.96 * pooled_se

print(f"Pooled effect: {pooled_effect:.3f}")
print(f"95% CI: [{ci_lower:.3f}, {ci_upper:.3f}]")

# Test for heterogeneity (Q statistic)
Q = ((studies['effect'] - pooled_effect)**2 * studies['weight']).sum()
df = len(studies) - 1
p_het = 1 - stats.chi2.cdf(Q, df)

print(f"\nHeterogeneity test:")
print(f"Q = {Q:.2f}, df = {df}, p = {p_het:.4f}")

# I² statistic
I2 = max(0, (Q - df) / Q) * 100
print(f"I² = {I2:.1f}%")

# Forest plot
fig, ax = plt.subplots(figsize=(10, 6))

y_pos = range(len(studies))
ax.errorbar(studies['effect'], y_pos,
            xerr=studies['se']*1.96,
            fmt='s', capsize=5, label='Individual studies')
ax.axvline(pooled_effect, color='red', linestyle='--',
           label=f'Pooled: {pooled_effect:.2f}')
ax.axvline(0, color='gray', linestyle='-', alpha=0.3)

ax.set_yticks(y_pos)
ax.set_yticklabels([f"{s['author']} ({s['year']})"
                    for _, s in studies.iterrows()])
ax.set_xlabel('Effect Size')
ax.set_title('Forest Plot: Meta-Analysis Results')
ax.legend()

plt.tight_layout()
plt.show()
```

[Forest plot appears here]

## Conclusion

Pooled effect size: 0.47 (95% CI: [0.38, 0.56])
Heterogeneity: Q=12.34, p=0.136, I²=27.3% (low)

The intervention shows a moderate positive effect across studies.


**Benefits**:
- Code and data in same document
- Anyone can re-run analysis
- Update data → Re-run → Updated results automatically
- No separate analysis files to manage

Use Case 4: Teaching and Learning

Scenario: Teaching statistics to grad students.

Traditional Approach:

  • Students install Python (30 minutes of IT troubleshooting)
  • Distribute Jupyter notebooks
  • Students can't get packages to install
  • First hour of class wasted on setup

GetScholar Approach:

Share link to GetScholar document:
https://getscholar.app/stats-workshop

Students open link in browser → Everything works immediately

# Workshop: Introduction to Statistical Testing

## Exercise 1: t-test

You have data from two groups. Test if they differ significantly.

```python
import scipy.stats as stats

# Data
group_a = [23, 25, 22, 24, 26, 23, 25]
group_b = [30, 28, 29, 31, 30, 32, 29]

# Perform t-test
t_stat, p_value = stats.ttest_ind(group_a, group_b)

print(f"t-statistic: {t_stat:.3f}")
print(f"p-value: {p_value:.4f}")

if p_value < 0.05:
    print("Reject null hypothesis: Groups differ significantly")
else:
    print("Fail to reject null hypothesis")

[Students click Run and see results]

## Exercise 2: Try it yourself

Modify the code above to test these two groups:

  • Group C: [15, 17, 16, 18, 15, 16]
  • Group D: [16, 18, 17, 19, 16, 18]

[Students edit code and run]


**Advantages**:
- Zero setup time
- Students focus on concepts, not installation
- Instructor can see students' code in real-time (shared workspace)
- Students can access materials from any device

Use Case 5: Collaborative Data Analysis

Scenario: Multi-site clinical trial. Three institutions analyzing shared data.

Traditional Workflow:

  • Email CSV files back and forth
  • Each site runs analysis in their local environment
  • Different Python versions, different package versions
  • Results don't match → Hours debugging environment differences

GetScholar Workflow:

Shared GetScholar workspace:
All three sites access the same document, with data and code

Site A (Boston) uploads data:
```python
import pandas as pd

# Upload data
data = pd.read_csv('trial_results.csv')
print(f"Total participants: {len(data)}")

Site B (Chicago) adds analysis:

# Efficacy analysis
treatment = data[data['arm']=='treatment']['outcome']
control = data[data['arm']=='control']['outcome']

from scipy.stats import ttest_ind
t, p = ttest_ind(treatment, control)
print(f"Primary outcome: t={t:.2f}, p={p:.4f}")

Site C (Los Angeles) adds safety analysis:

# Safety analysis
adverse_events = data.groupby('arm')['adverse_events'].sum()
print("Adverse events by arm:")
print(adverse_events)

All three sites see the same results, produced in the same Python environment. No version conflicts. No email attachments.


**Benefits**:
- Single source of truth
- Reproducible across institutions
- Real-time collaboration
- No environment setup per site

Advanced Features

1. Import Data from Multiple Sources

# From CSV file
import pandas as pd
data = pd.read_csv('data.csv')

# From Excel
data = pd.read_excel('data.xlsx', sheet_name='Sheet1')

# From pasted data
from io import StringIO
csv_string = """
age,score
25,85
30,90
28,88
"""
data = pd.read_csv(StringIO(csv_string))

# From API (if browser allows)
import requests
response = requests.get('https://api.example.com/data')
data = pd.DataFrame(response.json())

2. Interactive Visualizations

import matplotlib.pyplot as plt
import numpy as np

# Create interactive plot
fig, ax = plt.subplots(figsize=(10, 6))

# Data
x = np.linspace(0, 10, 100)
y = np.sin(x)

ax.plot(x, y, label='sin(x)')
ax.set_xlabel('x')
ax.set_ylabel('sin(x)')
ax.set_title('Interactive Plot')
ax.legend()
ax.grid(True)

plt.tight_layout()
plt.show()

Output appears as PNG image in document.

3. Machine Learning Models

from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
import pandas as pd
import numpy as np

# Generate sample data
np.random.seed(42)
X = np.random.randn(100, 1)
y = 2 * X + 1 + np.random.randn(100, 1) * 0.1

# Train-test split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit model
model = LinearRegression()
model.fit(X_train, y_train)

# Evaluate
y_pred = model.predict(X_test)
r2 = r2_score(y_test, y_pred)

print(f"Coefficient: {model.coef_[0][0]:.3f}")
print(f"Intercept: {model.intercept_[0]:.3f}")
print(f"R²: {r2:.3f}")

# Plot
import matplotlib.pyplot as plt
plt.scatter(X_test, y_test, label='Actual')
plt.plot(X_test, y_pred, color='red', label='Predicted')
plt.xlabel('X')
plt.ylabel('y')
plt.legend()
plt.show()

4. Time Series Analysis

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose

# Sample time series data
dates = pd.date_range('2020-01-01', periods=100, freq='D')
values = np.random.randn(100).cumsum() + 50

data = pd.Series(values, index=dates)

# Decompose
decomposition = seasonal_decompose(data, model='additive', period=7)

# Plot components
fig, (ax1, ax2, ax3, ax4) = plt.subplots(4, 1, figsize=(10, 10))

decomposition.observed.plot(ax=ax1, title='Observed')
decomposition.trend.plot(ax=ax2, title='Trend')
decomposition.seasonal.plot(ax=ax3, title='Seasonal')
decomposition.resid.plot(ax=ax4, title='Residual')

plt.tight_layout()
plt.show()

5. Statistical Power Analysis

from statsmodels.stats.power import ttest_power
import numpy as np
import matplotlib.pyplot as plt

# Power analysis across sample sizes
sample_sizes = np.arange(10, 201, 10)
effect_sizes = [0.2, 0.5, 0.8]  # Small, medium, large

fig, ax = plt.subplots(figsize=(10, 6))

for effect in effect_sizes:
    powers = [ttest_power(effect, n, alpha=0.05)
              for n in sample_sizes]
    ax.plot(sample_sizes, powers,
            label=f'Effect size = {effect}')

ax.axhline(0.80, color='red', linestyle='--',
           label='80% power threshold')
ax.set_xlabel('Sample Size per Group')
ax.set_ylabel('Statistical Power')
ax.set_title('Power Analysis for t-test')
ax.legend()
ax.grid(True)

plt.tight_layout()
plt.show()

Limitations and Workarounds

Current Limitations

  1. Large datasets (>100 MB): Browser memory limited

    • Workaround: Sample the data or use summary statistics (see the chunked-reading sketch after this list)
  2. Some packages unavailable: Not all PyPI packages work in Pyodide

    • Workaround: Most scientific packages (NumPy, Pandas, SciPy, Scikit-learn) work
  3. GPU computation: No GPU access in browser

    • Workaround: Use cloud GPUs for deep learning, in-browser for analysis
  4. File I/O limitations: Can't access local filesystem directly

    • Workaround: Upload files to GetScholar, or paste data
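
For the large-dataset case, one pattern that stays within browser memory is to stream the file in chunks and keep only running summaries. A minimal sketch, assuming an illustrative file name, column, and chunk size:

```python
import pandas as pd

# Read a large CSV in chunks and accumulate summary statistics,
# so the full dataset never has to sit in browser memory at once.
# 'large_dataset.csv' and the 'outcome' column are placeholders.
total_n = 0
total_sum = 0.0

for chunk in pd.read_csv('large_dataset.csv', chunksize=50_000):
    total_n += len(chunk)
    total_sum += chunk['outcome'].sum()

print(f"Rows processed: {total_n}")
print(f"Mean outcome: {total_sum / total_n:.3f}")
```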

What Works Well

✅ Data analysis (Pandas, NumPy)
✅ Statistics (SciPy, Statsmodels)
✅ Visualization (Matplotlib, Seaborn)
✅ Machine learning (Scikit-learn, small models)
✅ Regression analysis
✅ Meta-analysis
✅ Survey data analysis
✅ Power analysis

Comparison to Other Solutions

| Solution | Install? | Code+Writing | AI Help | Sharing | Offline |
|----------|----------|--------------|---------|---------|---------|
| Jupyter Notebook | ✅ Yes | ❌ Separate | ❌ No | ⚠️ GitHub | ✅ Yes |
| Google Colab | ❌ No | ❌ Separate | ⚠️ Limited | ✅ Link | ❌ No |
| RStudio Cloud | ❌ No | ⚠️ R Markdown | ❌ No | ✅ Link | ❌ No |
| GetScholar | ❌ No | ✅ Integrated | ✅ Yes | ✅ Link | ⚠️ Limited |

GetScholar advantages:

  • No installation (vs Jupyter)
  • Integrated with writing (vs Colab/Jupyter)
  • AI coding assistant (unique)
  • Easier sharing than GitHub

When to use Jupyter/Colab instead:

  • Very large datasets (>1 GB)
  • GPU-intensive deep learning
  • Packages not available in Pyodide

Getting Started

Step 1: Open GetScholar

  1. Go to GetScholar
  2. Create or open a document
  3. No Python installation required

Step 2: Insert Code Block

  1. Click "Code" button or type ```python
  2. Write Python code
  3. Click "Run" or Ctrl+Enter

Step 3: See Results

Output appears below code block immediately.

Example: Your First Analysis

print("Hello from in-browser Python!")

import numpy as np
import matplotlib.pyplot as plt

# Generate data
x = np.linspace(0, 10, 100)
y = np.sin(x)

# Plot
plt.plot(x, y)
plt.xlabel('x')
plt.ylabel('sin(x)')
plt.title('My First In-Browser Plot')
plt.grid(True)
plt.show()

That's it. You're now running Python in your browser.

Frequently Asked Questions

Is in-browser Python as fast as local Python?

For most research analyses: Yes. Pyodide (WebAssembly Python) runs at 50-80% of native Python speed, which is fast enough for typical data analysis.

For large-scale computations (millions of rows, deep learning): Use local Python or cloud GPUs.
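
For a rough feel in your own session, an informal timing like the sketch below is enough; the exact numbers depend on your machine, browser, and Pyodide build, so treat it as a sanity check rather than a benchmark.

```python
import time
import numpy as np

# Informal timing of a typical vectorized operation. NumPy's compiled
# routines run close to native speed under WebAssembly.
x = np.random.randn(1_000_000)

start = time.time()
for _ in range(50):
    x.std()
elapsed = time.time() - start

print(f"50 std() calls on 1M values: {elapsed:.2f} s")
```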

Can I use my own Python packages?

Pyodide includes 100+ popular packages (NumPy, Pandas, SciPy, Matplotlib, Scikit-learn, Statsmodels, etc.).

Additional pure-Python packages can be installed with micropip. Packages requiring C extensions may not work (yet).
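
A minimal sketch of installing an extra pure-Python package with micropip; the package name here (tabulate) is only an example, and top-level await is available in Pyodide's async execution contexts.

```python
import micropip

# Install a pure-Python wheel from PyPI at runtime (example package).
await micropip.install('tabulate')

from tabulate import tabulate
print(tabulate([["t-test", 0.016]], headers=["Test", "p-value"]))
```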

Is my data secure?

Yes. All code execution happens in your browser, not on GetScholar's servers. Your data never leaves your device during computation.

Only when you save the document does content sync to GetScholar's servers (encrypted).

Can I work offline?

Limited. You can:

  • ✅ View documents offline
  • ✅ Edit code offline
  • ⚠️ Run code requires initial internet connection (to load Pyodide)
  • ⚠️ Once loaded, code runs offline for that session

What if I need R instead of Python?

Currently, GetScholar supports Python only. R support (via WebR) is on our roadmap.

For now, we recommend using GetScholar for Python and RStudio Cloud for R.

Can I export my code?

Yes. Export options:

  • Jupyter Notebook (.ipynb): Open in Jupyter
  • Python script (.py): Run in any Python environment
  • Markdown (.md): Code + results as Markdown
  • PDF: Code + results + writing as PDF

Does this work on mobile?

Yes, but:

  • ✅ View code and results: Works great
  • ⚠️ Edit code: Small screen makes typing hard
  • ⚠️ Run code: Works, but slower than desktop

Recommended: Use desktop/laptop for analysis, mobile for reviewing results.

Conclusion: Democratizing Data Analysis for Researchers

In-browser Python + AI coding assistant removes barriers between research questions and answers.

No more:

  • ❌ Waiting for IT to install Python
  • ❌ Fighting with package versions
  • ❌ Switching between code and writing
  • ❌ Screenshots of plots pasted into Word
  • ❌ "Works on my machine" reproducibility issues

Instead:

  • ✅ Instant access from any browser
  • ✅ Code and results live with your writing
  • ✅ AI helps you write analysis code
  • ✅ Collaborators can re-run your analysis
  • ✅ True reproducibility

Whether you're:

  • Analyzing survey data
  • Running meta-analyses
  • Teaching statistics
  • Conducting clinical trials
  • Verifying published results

GetScholar's in-browser Python makes scientific computing accessible to everyone.

Start analyzing data in your browser →

